<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Time Traveler</title>
    <link>https://89douner.tistory.com/</link>
    <description>#Interest:  World History (The past)   #Work: Deep Learning (The future)   #Hobby: Music, Sports</description>
    <language>ko</language>
    <pubDate>Thu, 16 Apr 2026 17:54:19 +0900</pubDate>
    <generator>TISTORY</generator>
    <ttl>100</ttl>
    <managingEditor>Do-Woo-Ner</managingEditor>
    
    <item>
      <title>10.28 오랜만에 글을 올립니다 :)</title>
      <link>https://89douner.tistory.com/342</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;안녕하세요!&amp;nbsp; &lt;br /&gt;오랜&amp;nbsp;만에&amp;nbsp;글을&amp;nbsp;올리는것&amp;nbsp;같네요ㅎ &lt;br /&gt;&lt;br /&gt;대학원&amp;nbsp;박사과정을&amp;nbsp;시작하기&amp;nbsp;전까지는&amp;nbsp;블로그&amp;nbsp;글을&amp;nbsp;많이&amp;nbsp;올렸던&amp;nbsp;것&amp;nbsp;같은데,&amp;nbsp;박사과정이&amp;nbsp;시작되고는&amp;nbsp;글&amp;nbsp;올리는게&amp;nbsp;쉽지&amp;nbsp;않네요&amp;nbsp;ㅜ &lt;br /&gt;&lt;br /&gt;요즘에는&amp;nbsp;블로그&amp;nbsp;보다는&amp;nbsp;Notion을&amp;nbsp;이용하여&amp;nbsp;글을&amp;nbsp;작성하고&amp;nbsp;있는데,&amp;nbsp;많은&amp;nbsp;분들께&amp;nbsp;공유가&amp;nbsp;안된다는&amp;nbsp;점이&amp;nbsp;조금&amp;nbsp;아쉽네요ㅜ &lt;br /&gt;. &lt;br /&gt;. &lt;br /&gt;&lt;br /&gt;그래서, 이번 글에서는 간단하게 한 해 동안 무엇을 했는지 정리하고, 남은 기간에 무엇을 할지에 대해서 말씀드리려고합니다! (물론 궁금하진 않으시겠지만..ㅎㅎ)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;[2022년 경험한 것들]&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;1. MRI, CT 데이터 공부!&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;올해는 MRI, CT 관련 딥러닝 연구를 진행했습니다!&lt;/li&gt;
&lt;li&gt;처음 다루는 데이터다 보니 데이터의 배경지식 및 여러 전처리 방식들을 정리할 수 있어서 좋았어요ㅎ&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1253&quot; data-origin-height=&quot;1207&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b0tw2w/btrPEHqjgGL/i7puNs6L9AS75KAn9uQtV0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b0tw2w/btrPEHqjgGL/i7puNs6L9AS75KAn9uQtV0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b0tw2w/btrPEHqjgGL/i7puNs6L9AS75KAn9uQtV0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb0tw2w%2FbtrPEHqjgGL%2Fi7puNs6L9AS75KAn9uQtV0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;469&quot; height=&quot;452&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1253&quot; data-origin-height=&quot;1207&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;예를 들면, 딥러닝 학습을 위해 (brain) MRI nifti 파일을 어떻게 전처리 하면 좋은지, CT DICOM 파일에서 특정 병변을 더 잘 보이게 하기 위해 어떤 작업이 선행돼야하는지, 2D DICOM 데이터를 3D nifti 데이터로 변환시킬 때 어떤 부분을 주의해야하는지 등을 정리할 수 있었습니다!&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음2.png&quot; data-origin-width=&quot;1257&quot; data-origin-height=&quot;1222&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b4yNrv/btrPFyGzTot/58pzEiKhIzDQuhKv81fUkK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b4yNrv/btrPFyGzTot/58pzEiKhIzDQuhKv81fUkK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b4yNrv/btrPFyGzTot/58pzEiKhIzDQuhKv81fUkK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb4yNrv%2FbtrPFyGzTot%2F58pzEiKhIzDQuhKv81fUkK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;472&quot; height=&quot;459&quot; data-filename=&quot;제목 없음2.png&quot; data-origin-width=&quot;1257&quot; data-origin-height=&quot;1222&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;의료 영상 이미지는 일반 이미지와는 달리 여러 특성들이 meta 정보로 주어지기 때문에 이를 이용해 적절한 전처리 작업을 거쳐야 해요. 그래야, 딥러닝이 학습하기 좋은 데이터셋이 됩니다!&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;이렇게 잘 정리한 자료를 토대로 내년 부터는 본격적인 의료 딥러닝 리서치 및 연구를 진행할 예정이에요!! (물론, 전처리 관련 자료는 지속적으로 피드백 받으면서 업데이트 해나갈 예정입니다 ㅎ)&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;2. 대회는 또 달라!&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;올해부터는 처음으로 의료 인공지능 대회를 참가하기 시작했어요. &lt;/span&gt;&lt;/li&gt;
&lt;li&gt;대회를 참가하면서 좋았던 것은 실제 의료 현장에서 시급하게 다루어지는 문제들을 쉽게 접해 볼 수 있었어요.&lt;/li&gt;
&lt;li&gt;또한, 검증받은 public dataset으로 MRI, CT, 3D, 2D, segmentation, object detection, classification 과 같이 다양한 딥러닝 task를 빠른 시간내에 배우고 습득할 수 있었다는 점입니다.&lt;/li&gt;
&lt;li&gt;제 경우에는 대회 참가 시 팀을 구성하는데, 이번에 팀 단위의 작업을 하면서 어떠한 협업 시스템이 마련되면 좋은지를 느낄 수 있었습니다.&lt;/li&gt;
&lt;li&gt;&lt;span&gt;그래서, 관련 협업 개발도구를 사용할 수 있어서 개발능력도 향상 됐던것 같아요ㅎ&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;miccai2022-logo.png&quot; data-origin-width=&quot;576&quot; data-origin-height=&quot;184&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eVauq9/btrPED9xzgA/wfuPiEjgHyYxbXBvk8YBP1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eVauq9/btrPED9xzgA/wfuPiEjgHyYxbXBvk8YBP1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eVauq9/btrPED9xzgA/wfuPiEjgHyYxbXBvk8YBP1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeVauq9%2FbtrPED9xzgA%2FwfuPiEjgHyYxbXBvk8YBP1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;576&quot; height=&quot;184&quot; data-filename=&quot;miccai2022-logo.png&quot; data-origin-width=&quot;576&quot; data-origin-height=&quot;184&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;팀 단위의 협업 시스템을 통해 짧은 시간안에 다양한 실험을 할 수 있었고, 운 좋게도 MICCAI라는 국제 학회의 ISLES라는 challenge에서 2등을 수상할 수 있었습니다!!&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;해외학회에 참석하여 대회에 참석했던 많은 경쟁자들과 서로의 방법에 대해 논의 할 수 있었고, 새로운 동기부여들도 얻을 수 있어 좋았습니다. (해당 대회는 따로 글을 작성하도록 하겠습니다 ^^)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;KakaoTalk_20221024_085951604_01.jpg&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;960&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GiKJy/btrPEkCdad4/B8BeyStlp1OKW50gzaVvok/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GiKJy/btrPEkCdad4/B8BeyStlp1OKW50gzaVvok/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GiKJy/btrPEkCdad4/B8BeyStlp1OKW50gzaVvok/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGiKJy%2FbtrPEkCdad4%2FB8BeyStlp1OKW50gzaVvok%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;489&quot; height=&quot;367&quot; data-filename=&quot;KakaoTalk_20221024_085951604_01.jpg&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;960&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;3. 자료화와 협업은 필수!&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;대학원에서 성과를 내지 못하고 있을 때,&amp;nbsp; &quot;연구 능력이 없는것 같다&quot;, &quot;이런 기초지식도 없으면서 뭘 하겠다는건지&quot; 와 같은 생각을 하면 굉장히 힘들거에요 ㅜ&lt;/li&gt;
&lt;li&gt;하지만, 자신이 처한 문제를 오로지 개인의 탓으로 돌리는건 효과적인 문제 해결방식은 아닌것 같습니다.&lt;/li&gt;
&lt;li&gt;현재 겪고 있는 문제가 개인의 문제가 아닌 시스템의 문제일 수 도 있기 때문이죠.&lt;/li&gt;
&lt;li&gt;물론 개인의 노력이 문제를 해결하는 근본적인 방법이겠지만, 좋은 시스템이 갖춰져 있다면 문제 해결은 좀 더 빨라질거라 생각합니다!&lt;/li&gt;
&lt;li&gt;제가 축구를 좋아해서 가끔씩 생각 했던 부분이기도 한데, 예전에만해도 잘 나가는 팀의 기준은 얼마나 좋은 스타플레이어들을 보유하고 있느냐였던 것 같아요.&lt;/li&gt;
&lt;li&gt;하지만, 현대 축구로 넘어오면서 얼마나 좋은 감독을 보유하고 있는지가 중요해지기 시작했는데, 그 이유는 그 감독이 팀에 좋은 시스템을 심어 놓기 때문이라고 생각해요.&lt;/li&gt;
&lt;li&gt;저는 제가 연구하는 곳이 개인에 의존하기 보다는, 누가 들어와도 잘 성장할 수 있는 좋은 시스템을 보유한 곳이었으면 좋겠어요.&lt;/li&gt;
&lt;li&gt;그래서, 올해 저 또한 좋은 시스템을 구축하기 위해서 모두가 이해할 수 있는 자료를 만들고 협업을 위한 다양한 시도를 한 것 같습니다!&lt;/li&gt;
&lt;li&gt;내년에는&amp;nbsp;좀&amp;nbsp;더&amp;nbsp;갖춰진&amp;nbsp;시스템으로&amp;nbsp;연구성과에&amp;nbsp;속도를&amp;nbsp;가해볼까해요ㅎ&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;[11월, 12월 계획]&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;이제 올해도 2달 정도가 남은 것 같습니다.&lt;/li&gt;
&lt;li&gt;11월,&amp;nbsp;12월&amp;nbsp;계획은&amp;nbsp;지금까지&amp;nbsp;진행했던&amp;nbsp;의료&amp;nbsp;영상&amp;nbsp;관련&amp;nbsp;자료를&amp;nbsp;업데이트&amp;nbsp;하며,&amp;nbsp;의료&amp;nbsp;영상&amp;nbsp;전처리를&amp;nbsp;위한&amp;nbsp;모듈을&amp;nbsp;정리하려고&amp;nbsp;해요.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;앞서 언급했던대로 내부 github을 만들어 의료 영상 처리의 기본 전처리 모듈을 만들 예정입니다.&lt;/li&gt;
&lt;li&gt;또한 대회에 참가했던 methodology를 논문화시키려고 하고 있습니다!&lt;/li&gt;
&lt;li&gt;추가적으로&amp;nbsp;이&amp;nbsp;분야를&amp;nbsp;모르지만&amp;nbsp;관심있는&amp;nbsp;분들을&amp;nbsp;초대해&amp;nbsp;정리한&amp;nbsp;내용들의&amp;nbsp;퀄리티&amp;nbsp;체크를&amp;nbsp;진행할&amp;nbsp;예정이에요.&amp;nbsp;이해가&amp;nbsp;잘&amp;nbsp;안되는&amp;nbsp;부분들은&amp;nbsp;보완하여&amp;nbsp;교육자료의&amp;nbsp;질을&amp;nbsp;향상시킬&amp;nbsp;예정입니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;[개인적인&amp;nbsp;홍보] &lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;내년에도 역시 연구와 병행하면서 큰 국제 대회를 2개정도 나가려고 합니다!&lt;/li&gt;
&lt;li&gt;그 대회를 수행하려면 어느정도 의료 인공지능 연구 경험이 있어야 해요ㅜ&lt;/li&gt;
&lt;li&gt;저는 지금까지 팀원 구성을 특정 소속인원들만으로 구성해본적이 없습니다.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;그래서 지금도 서울, 포항, 울산 등에서 대학원 및 스타트업 소속인원들과 작업을 하고 있습니다ㅎ (물론 의료 도메인을 전혀 모르시는 분들도 많으셨구요!)&lt;/li&gt;
&lt;li&gt;내년 대회 또는 앞으로 의료 인공지능 연구 또한 다양한 인원들과 스터디하여 진행하려고 합니다!&lt;/li&gt;
&lt;li&gt;그래서 의료 인공지능을 위한 스터디원(팀원)을 모집하려고 해요!&lt;/li&gt;
&lt;li&gt;개인적으로는 협업할 수 있는 자세를 갖고, 지속적으로 시간을 투자해 같이 공부할 수 있는 분들이 지원해주셨으면 좋겠어요ㅜ&lt;/li&gt;
&lt;li&gt;사실 고등학생이나 대학교 학부생 분들이 지원하셔도 크게 상관없습니다. 제가 팀을 구성하려는 기준은 다양성이라서요!&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;b&gt;[앞으로의&amp;nbsp;바램]&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;물론 많은 시간이 소요되는 작업들이지만, 이것이 모두를 성장 시킬 수 있는 가장 빠른 방법이고 시간이 지날 수록 연구속도를 가속화 시키고 좋은 성과를 뱉어내는데 필수적인 역할을 할 것으로 기대하고 있어요.&lt;/li&gt;
&lt;li&gt;스타트업이나 회사에서만 배울 수 있는것이 많은게 아니라, 대학원에서도 배울 수 있는게 많았으면 좋겠다는 마음가짐으로 스터디원을 꾸려나갈 생각입니다!&lt;/li&gt;
&lt;li&gt;이러한&amp;nbsp;협업&amp;nbsp;연구시스템을&amp;nbsp;구성하는게&amp;nbsp;또&amp;nbsp;하나의&amp;nbsp;좋은&amp;nbsp;연구&amp;nbsp;방법이다라는&amp;nbsp;생각이&amp;nbsp;들겠금&amp;nbsp;많은&amp;nbsp;연구성과로&amp;nbsp;보여줄&amp;nbsp;수&amp;nbsp;있도록&amp;nbsp;노력하겠습니다!&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;[공지]&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;그리고&amp;nbsp;앞으로는&amp;nbsp;이&amp;nbsp;블로그에서&amp;nbsp;기술관련&amp;nbsp;내용&amp;nbsp;보다는&amp;nbsp;이렇게&amp;nbsp;개인적인&amp;nbsp;소식을&amp;nbsp;전달해&amp;nbsp;드리는&amp;nbsp;글을&amp;nbsp;작성할&amp;nbsp;것&amp;nbsp;같습니다! &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt; [하고&amp;nbsp;싶은&amp;nbsp;말]&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;공부를 잘 하고 좋은 성과를 내는 것도 중요하지만, 연구하는 그 과정이 즐거웠으면 좋겠어요.&lt;/li&gt;
&lt;li&gt;모른다고 해서 무시받을게 걱정되는게 아니라, 모른다고 말할 수 있는 연구 문화가 됐으면 좋겠어요.&lt;/li&gt;
&lt;li&gt;개인의 성장은 누군가의 도움으로 이루어진 것이라고 생각했으면 좋겠어요.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;그리고 이러한 이야기들이 듣기 좋은 이상적인 이야기가 아니라 현실적으로 공감하는 이야기가 됐으면 좋겠어요.&lt;/li&gt;
&lt;li&gt;단순히&amp;nbsp;어떠한&amp;nbsp;큰&amp;nbsp;포부를&amp;nbsp;갖고&amp;nbsp;있거나&amp;nbsp;신념이&amp;nbsp;있어서&amp;nbsp;하는&amp;nbsp;말이&amp;nbsp;아니라&amp;nbsp;그냥&amp;nbsp;그랬으면&amp;nbsp;좋겠어요. &lt;br /&gt;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;P.S. 같이 의료 인공지능관련하여 연구하고 스터디 하시고 싶으신 분이 있으시면 아래 메일로 연락주세요!&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;89douner@gmail.com&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;그럼 연말에 한 번 다시 찾아뵙도록 하겠습니다!&lt;/p&gt;</description>
      <category>주절주절</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/342</guid>
      <comments>https://89douner.tistory.com/342#entry342comment</comments>
      <pubDate>Sat, 29 Oct 2022 10:00:04 +0900</pubDate>
    </item>
    <item>
      <title>2. Unsupervised pretraining (Greedy Layer-Wise)</title>
      <link>https://89douner.tistory.com/340</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;지난 글&lt;/b&gt;에서는 &lt;b&gt;representation learning&lt;/b&gt;에 대해서 알아보았다면, &lt;b&gt;이번 글&lt;/b&gt;에서는 &lt;b&gt;unsupervised learning&lt;/b&gt;으로 학습시킨&lt;b&gt; representation model&lt;/b&gt;을 어떻게 &lt;b&gt;pretraining model&lt;/b&gt;로써 사용했는지 알아보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리는 보통 &lt;b&gt;supervised learning 기반의 Deep Neural Network (DNN) 모델&lt;/b&gt;을 &lt;b&gt;deep supervised network&lt;/b&gt;라고 합니다. 이러한 deep supervised network는 이미지 분야, 시계열 분야 등 &lt;b&gt;다양한 task 도메인에 따라 CNN, RNN&lt;/b&gt; 같은 모델로 확장되어 왔죠. (기존 DNN이 naive한 모델이라고 한다면, &lt;b&gt;CNN과 RNN&lt;/b&gt; 같은 모델은 convolution, recurrence 와 같은 DNN에 비해 &lt;b&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;architectural&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;sp&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;&lt;b&gt;ecializations 한 모델&lt;/b&gt;이라고 일컫습니다)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 이러한 CNN, RNN이 아닌 &lt;b&gt;오로지 fully connected layer로 구성된 DNN 방식&lt;/b&gt;으로 &lt;b&gt;supervised task&lt;/b&gt;에 잘 동작하게 만든 &lt;b&gt;학습 방법&lt;/b&gt;이 등장합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;983&quot; data-origin-height=&quot;414&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bAyfhB/btrrweLmT1z/WHn8rGUm1mJzVOVGGT1yM1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bAyfhB/btrrweLmT1z/WHn8rGUm1mJzVOVGGT1yM1/img.png&quot; data-alt=&quot;그림출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://paperswithcode.com/methods/category/convolutional-neural-networks&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bAyfhB/btrrweLmT1z/WHn8rGUm1mJzVOVGGT1yM1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbAyfhB%2FbtrrweLmT1z%2FWHn8rGUm1mJzVOVGGT1yM1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;672&quot; height=&quot;283&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;983&quot; data-origin-height=&quot;414&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림출처:&amp;amp;amp;amp;amp;amp;amp;nbsp;https://paperswithcode.com/methods/category/convolutional-neural-networks&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;이 학습방법은 unsupervised learning 방식으로 pretraining 모델을 만들고, 해당 pretraining model을 supervised task에 적용하는 것이었습니다.&quot;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위에서 설명한 학습 방법론들은 현재 굉장히 많은 분야에서 연구되고 있는데, 이번 글에서는 그 중 &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;최초의 방식&lt;/span&gt;&lt;/b&gt;인 &lt;span style=&quot;color: #ee2323;&quot;&gt;&amp;nbsp;&lt;b&gt;'Greedy Layer-Wise Unsupervised Pretrainig'&lt;/b&gt;&lt;/span&gt;에 대해서 알아보려고 합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;※Greedy Layer-wise unsupervised training 관련 논문은 아래 논문을 참고해주세요!&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Yoshua&amp;nbsp;Bengio,&amp;nbsp;Pascal&amp;nbsp;Lamblin,&amp;nbsp;Dan&amp;nbsp;Popovici,&amp;nbsp;and&amp;nbsp;Hugo&amp;nbsp;Larochelle.&amp;nbsp;&lt;b&gt;Greedy&amp;nbsp;layer-wise&amp;nbsp;training&amp;nbsp;of&amp;nbsp;deep&amp;nbsp;networks&lt;/b&gt;.&amp;nbsp;In&amp;nbsp;Bernhard&amp;nbsp;Sch&amp;uml;olkopf,&amp;nbsp;John&amp;nbsp;Platt,&amp;nbsp;and&amp;nbsp;Thomas&amp;nbsp;Hoffman,&amp;nbsp;editors,&amp;nbsp;Advances&amp;nbsp;in&amp;nbsp;Neural&amp;nbsp;Information&amp;nbsp;Processing&amp;nbsp;Systems&amp;nbsp;19&amp;nbsp;(NIPS'06),&amp;nbsp;pages&amp;nbsp;153&amp;ndash;160.&amp;nbsp;MIT&amp;nbsp;Press,&amp;nbsp;2007.&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. Greedy Layer-Wise Unsupervised Pretraining&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Greedy Layer-Wise Unsupervised Pretraining&lt;/b&gt; 방식은 &lt;b&gt;각 layer&lt;/b&gt;를 &lt;b&gt;순차적으&lt;/b&gt;로 &lt;b&gt;unsupervised learning&lt;/b&gt; 시키는 &lt;b&gt;학습 방법론&lt;/b&gt;이라고 볼 수 있습니다. 어떤 특정 layer를 B layer라고 했을 때, B layer의 이전 layer output 값을 취하여 B layer가 new representation (or feature)를 출력할 수 있도록 도와주는 방식입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;div&gt;&amp;nbsp;&lt;/div&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Unsupervised-greedy-layer-wise-training-procedure.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;367&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; data-alt=&quot;그림 출처: https://www.researchgate.net/figure/Unsupervised-greedy-layer-wise-training-procedure_fig4_308818792&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdaJDJl%2Fbtrrun9FmEv%2FtKX1osF3GXTHOADfIn2E20%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;507&quot; height=&quot;219&quot; data-filename=&quot;Unsupervised-greedy-layer-wise-training-procedure.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;367&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처: https://www.researchgate.net/figure/Unsupervised-greedy-layer-wise-training-procedure_fig4_308818792&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어, &lt;b&gt;20&amp;times;20 이미지 data (=400차원)&lt;/b&gt;가 &lt;b&gt;최초의 정보(information)&lt;/b&gt;로 주어졌다고 가정해보겠습니다. 그리고, &lt;b&gt;첫 번째 layer&lt;/b&gt;를 거쳐서 나오는 &lt;b&gt;neuron(=indepent vector)&lt;/b&gt;이&lt;b&gt; 300개(=300차원)&lt;/b&gt;라고 해보겠습니다. &lt;b&gt;최초의 정보인 400차원 데이터는 첫 번째 layer를 거쳐 가공된 300차원 feature(or representation)가 됩니다&lt;/b&gt;. 그리고, &lt;b&gt;첫 번째 layer를 거친 300차원 feature는 두 번째 layer 입장에서 보면 가공해야 할 또 다른 정보(=information)&lt;/b&gt;가 되는 것이죠. &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;어찌됐든, 첫 번째 layer의 최초의 정보(=400차원)와 두 번째 layer의 정보(=300차원)는 성격이 다르기 때문에&lt;/span&gt;&lt;/b&gt;, 이들을 가공해서 얻게 되는 output인 feature(or representation)도 layer 마다 성격이 다르다고 할 수 있습니다.&amp;nbsp; 이러한 방식을 통해 각 layer 마다 고유의 representation을 갖게 되는 것이죠.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;610&quot; data-origin-height=&quot;423&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/GcmFw/btrrwe6A3kB/zovwCKD30fAI6cvr75Tikk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/GcmFw/btrrwe6A3kB/zovwCKD30fAI6cvr75Tikk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/GcmFw/btrrwe6A3kB/zovwCKD30fAI6cvr75Tikk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FGcmFw%2Fbtrrwe6A3kB%2FzovwCKD30fAI6cvr75Tikk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;610&quot; height=&quot;423&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;610&quot; data-origin-height=&quot;423&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light'; color: #000000;&quot;&gt;&lt;b&gt;Greedy Layer-Wise Unsupervised Pretraining 알고리즘&lt;/b&gt;은 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;L: Unsupervised learning algorithm &amp;rarr; for 문에서 k가 하나씩 증가할 때, L 알고리즘을 통해 만들어진 \(f^{(k)}\)은 학습이 완료되었음을 가정 &amp;rarr; L을 통해 학습한 각 layer는 고유의 representation을 갖고 있음&amp;nbsp;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;참고로 이 책에서는 unsupervised learning algorithm에 대한 구체적인 description은 없고, 여러 논문들을 reference 해놓기만 했습니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Identity function: f(x) = x &amp;rarr; 입력값=출력값&amp;nbsp;&lt;/li&gt;
&lt;li&gt;Function composition: \(g\)&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;∘\(f\) = \(g(f(x))\)&amp;nbsp;&lt;/span&gt; &amp;rarr; \(f^{(k)}\)&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;∘\(f\) = \(f^{(k)}(f(x))\) &lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;Y: target(labeled) data&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;X: input data&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;background-color: #ffffff; color: #202124;&quot;&gt;※ f: fine-tuning if문에서의 f는 이전 for문에 의해 \(f^{k=m}(f(f....(f^{1}(X))....))\)를 의미한다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1054&quot; data-origin-height=&quot;718&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/caQlmE/btrrr6msqne/P1ahnGbQRl3leRrfvSStmk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/caQlmE/btrrr6msqne/P1ahnGbQRl3leRrfvSStmk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/caQlmE/btrrr6msqne/P1ahnGbQRl3leRrfvSStmk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcaQlmE%2Fbtrrr6msqne%2FP1ahnGbQRl3leRrfvSStmk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;529&quot; height=&quot;360&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1054&quot; data-origin-height=&quot;718&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 이야기 했듯, 이전에는 FC layer 기반 (=jointly)의 DNN 모델을 처음부터 supervised task에 적용하는 것이 매우 힘들었지만, 위와 같은 학습 방법론 (=Greedy layer-wise Unsuperivsed pretraining)을 통해 DNN 모델을 supervised task에 적용할 수 있게 됐습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Greedy layer-wise training procedures based on unsupervised criteria have long been used to sidestep the diﬃculty of jointly training the layers of a deep neural net for a supervised task.&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;1-1. &quot;Greedy layer-wise unsupervised&quot;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 알고리즘을 &lt;b&gt;greedy layer-wise라고 부르는 이유&lt;/b&gt;를 간단히 &lt;b&gt;설명&lt;/b&gt;해보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리는 어떤 &lt;b&gt;문제(=task)&lt;/b&gt;를&lt;b&gt; 해결&lt;/b&gt;하기 위해 &lt;b&gt;알고리즘(=solution)&lt;/b&gt;을 고안합니다. 그런데, 세상에는 완벽한 해결책이라는게 존재하지 않죠. 그렇기 때문에 어떤 경우에는 굉장히 복잡한 task를 작은 문제들로 나누고, 이러한 작은 문제들을 해결해 나가면서 최종 task의 해를 구하기도 합니다. 보통 이러한 알고리즘 해결법을 Dynamic programming이라고 하죠. 반면, 복잡한 task를 작은 문제들로 나누지 못하는 경우에는 &lt;b&gt;그때 그때 직면한 문제를 해결함으로써 최종 task의 해를 구해나가는데, 이 경우를 greedy algorithm&lt;/b&gt;이라고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉, &lt;b&gt;Greedy algorithm&lt;/b&gt;은 각각에 직면 하는 단계들에서 solution을 구하는 (=&lt;b&gt;each piece of the solution independently&lt;/b&gt;) 방법론이라고 할 수 있습니다.&amp;nbsp; &lt;b&gt;'Greedy layer-wise unsupervised pretrining' 관점&lt;/b&gt;에서 보면 &lt;b&gt;indepent pieces&lt;/b&gt;를 &lt;b&gt;각각의 layer&lt;/b&gt;라고 볼 수 있겠네요. 여기서&lt;b&gt; indepent라고 표현한 이유&lt;/b&gt;는 두 번째 layer를 unsupervised 방식시켜도, 첫 번째 layer가 학습되지 않기 때문에 '학습의 관점'에서는 layer들이 indepent하다고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;The&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;lower&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;layers&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;(which&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;are&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;trained ﬁrst)&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;are&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;not&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;adapted&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;after&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;the&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;upper&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;layers&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;are&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;introduced.&quot;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt; 학습 시킬 때는 첫 번째 layer가 별로(=layer-wise) unsupervised learning 방식으로 학습을 시킵니다. 입력 값이 마주하는 layer 마다 독립적으로 unsupervised task를 진행합니다. 즉, 각 layer 마다의 독립적인 solution을 구하는 것인데, 바꿔말하면 각 layer에 해당하는 최소 loss값을 구할 수 있도록 학습시키는 것이 목적입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Unsupervised-greedy-layer-wise-training-procedure.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;367&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; data-alt=&quot;그림 출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://www.researchgate.net/figure/Unsupervised-greedy-layer-wise-training-procedure_fig4_308818792&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/daJDJl/btrrun9FmEv/tKX1osF3GXTHOADfIn2E20/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdaJDJl%2Fbtrrun9FmEv%2FtKX1osF3GXTHOADfIn2E20%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;507&quot; height=&quot;219&quot; data-filename=&quot;Unsupervised-greedy-layer-wise-training-procedure.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;367&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처:&amp;amp;amp;amp;amp;amp;amp;nbsp;https://www.researchgate.net/figure/Unsupervised-greedy-layer-wise-training-procedure_fig4_308818792&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;1-2. &quot;Pretraining&quot;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 학습했던 &lt;b&gt;&quot;Greedy layer-wise Unsupervised&quot;&lt;/b&gt; 방식으로 학습한 &lt;b&gt;DNN&lt;/b&gt;은 결국 &lt;b&gt;supervised task에 (fine-tuning 되어) 사용&lt;/b&gt;됩니다. 이러한 관점으로 봤을 때, 이전 &lt;b&gt;Unsupervised 방식으로 학습된 DNN은 pretraining model&lt;/b&gt;이라고 볼 수 있죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉, &quot;Greedy layer-wise unsupervised pretraining model&quot;은 supervised task 관점에서 보면, weight initialization 또는 regularizer (to prevent overfitting) 역할을 하게 되는 겁니다.&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. Greedy Layer-Wise Unsupervised Pretraining for Unsupervised learning&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞선 글에서는 &lt;b&gt;&quot;Greedy layer-wise unsupervised pretraining 모델&quot;&lt;/b&gt;이 supervised task에 fine-tuning하기 위해 사용된다고 언급했지만, 사실 &lt;b&gt;unsupervised task를 해결하기 위한 pretraining 모델로써도 사용&lt;/b&gt;이 됩니다. 정확히 어떻게 사용했는지는 아래 세 논문을 살펴보는 것을 추천합니다. &lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;참고로 &lt;b&gt;아래 세 가지 모델&lt;/b&gt;에 대한 자세한 설명은 &lt;b&gt;Deeplearning book &quot;Chapter 20. Deep Generative model&quot;&lt;/b&gt;에서 다루는데,추후에 Chapter20에 대한 내용도 포스팅 하도록 하겠습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Reducing the Dimensionality of Data with Neural Networks &amp;rarr; deep autoencoder&lt;span data-token-index=&quot;1&quot; data-reactroot=&quot;&quot;&gt; &lt;/span&gt;(by G. E. HINTON AND R. R. SALAKHUTDINOV; 2006)&lt;/li&gt;
&lt;li&gt;A Fast Learning Algorithm For Deep Belief Nets &amp;rarr; deep belief nets (by G. E. HINTON; 2006)&lt;/li&gt;
&lt;li&gt;Deep Boltzmann Machines (by &lt;span style=&quot;color: #000000;&quot;&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;Salakh&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;utdino&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;v&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;and&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;Hin&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;ton&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;background-color: #ffffff;&quot;&gt;2009a&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;background-color: #ffffff; color: #000000;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 Unsupervised learning 방식으로 pretraining 모델을 만드는 최초의 학습 방법론에 대해서 설명해보았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그럼 다음 글에서는 &lt;b&gt;&quot;Unsupervised learning 방식의 pretraining 모델을 언제 써야 하는지?&quot;&lt;/b&gt;에 대한 내용을 다루어보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;</description>
      <category>Representation Learning</category>
      <category>greedy algorithm</category>
      <category>greedy layer-wise</category>
      <category>pretraining model</category>
      <category>representation learning</category>
      <category>Unsupervised learning</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/340</guid>
      <comments>https://89douner.tistory.com/340#entry340comment</comments>
      <pubDate>Mon, 24 Jan 2022 11:07:47 +0900</pubDate>
    </item>
    <item>
      <title>1. Representation Learning 이란?</title>
      <link>https://89douner.tistory.com/339</link>
      <description>&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light'; font-size: 1.12em; letter-spacing: 0px;&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번글에서는 &lt;b&gt;representation learning&lt;/b&gt;이라는 개념에 대해서 설명하려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;개인적으로 &lt;b&gt;2021년&lt;/b&gt; 동안 논문을 살펴보면서 &lt;b&gt;가장 눈에 많이 띄었던 용어&lt;/b&gt;가 &lt;b&gt;representation learning&lt;/b&gt; 이었습니다. &lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를들어, GAN, self-supervised learning, transfer learning, domain adaptation 관련 논문들에서 자주 봤던 것 같네요. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1069&quot; data-origin-height=&quot;412&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/QlgXw/btrrdnIxnJA/N2Fc7RFOijOzXsFrEkJzPK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/QlgXw/btrrdnIxnJA/N2Fc7RFOijOzXsFrEkJzPK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/QlgXw/btrrdnIxnJA/N2Fc7RFOijOzXsFrEkJzPK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FQlgXw%2FbtrrdnIxnJA%2FN2Fc7RFOijOzXsFrEkJzPK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1069&quot; height=&quot;412&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1069&quot; data-origin-height=&quot;412&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이렇듯, 최근 &lt;b&gt;deep learning 모델&lt;/b&gt;들을&lt;b&gt; representation learning 관점&lt;/b&gt;에서 &lt;b&gt;설명&lt;/b&gt;하고 &lt;b&gt;해석&lt;/b&gt;하려는 경향이 많은 것 같아 이번글에서 &lt;b&gt;representation learning&lt;/b&gt;과 &lt;b&gt;deep learning&lt;/b&gt; 간의 &lt;b&gt;관계&lt;/b&gt;를 살펴보려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;※해당 내용들은 아래 &lt;b&gt;deep learning book&lt;/b&gt;을 참고 했음을 말씀드립니다.&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;815&quot; data-origin-height=&quot;365&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b52ZBM/btrq7VNMrIZ/GelQq1Qk0dqrRUatVgZWa1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b52ZBM/btrq7VNMrIZ/GelQq1Qk0dqrRUatVgZWa1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b52ZBM/btrq7VNMrIZ/GelQq1Qk0dqrRUatVgZWa1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb52ZBM%2Fbtrq7VNMrIZ%2FGelQq1Qk0dqrRUatVgZWa1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;815&quot; height=&quot;365&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;815&quot; data-origin-height=&quot;365&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.deeplearningbook.org/contents/representation.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.deeplearningbook.org/contents/representation.html&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1641257164403&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;https://www.deeplearningbook.org/contents/representation.html&quot; data-og-description=&quot;&quot; data-og-host=&quot;www.deeplearningbook.org&quot; data-og-source-url=&quot;https://www.deeplearningbook.org/contents/representation.html&quot; data-og-url=&quot;https://www.deeplearningbook.org/contents/representation.html&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://www.deeplearningbook.org/contents/representation.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.deeplearningbook.org/contents/representation.html&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;https://www.deeplearningbook.org/contents/representation.html&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.deeplearningbook.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;8f9ad28e3cfa759b17b716a4d00d934ab1f681bf.png&quot; data-origin-width=&quot;703&quot; data-origin-height=&quot;714&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/baxRDo/btrpz3eyXaf/qddJuBGwMdZVrPGqzPRS91/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/baxRDo/btrpz3eyXaf/qddJuBGwMdZVrPGqzPRS91/img.png&quot; data-alt=&quot;그림 출처: https://dmitry.ai/t/topic/175&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/baxRDo/btrpz3eyXaf/qddJuBGwMdZVrPGqzPRS91/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbaxRDo%2Fbtrpz3eyXaf%2FqddJuBGwMdZVrPGqzPRS91%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;468&quot; height=&quot;475&quot; data-filename=&quot;8f9ad28e3cfa759b17b716a4d00d934ab1f681bf.png&quot; data-origin-width=&quot;703&quot; data-origin-height=&quot;714&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처: https://dmitry.ai/t/topic/175&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. Representation Learning이란?&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;※Representation learning에 대한 개념을 사전적으로 정의하기 전에 &lt;b&gt;우선 몇 가지 예시&lt;/b&gt;들을 통해 &lt;b&gt;representation&lt;/b&gt;에 대한 &lt;b&gt;개념&lt;/b&gt;을 직관적으로 &lt;b&gt;이해&lt;/b&gt;해보도록 해보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;1-1. 일반적인 관점에서의 representation&amp;nbsp;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, representation이란 개념을 일반적인 관점에서 살펴보겠습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span&gt;&quot;잠시 아래 이미지를 보고 5초 동안 생각해보세요!&quot;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;515&quot; data-origin-height=&quot;150&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/biiIiZ/btrrcgDkLxe/NNdOstjXqej5n2Srf8X6FK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/biiIiZ/btrrcgDkLxe/NNdOstjXqej5n2Srf8X6FK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/biiIiZ/btrrcgDkLxe/NNdOstjXqej5n2Srf8X6FK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbiiIiZ%2FbtrrcgDkLxe%2FNNdOstjXqej5n2Srf8X6FK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;515&quot; height=&quot;150&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;515&quot; data-origin-height=&quot;150&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;여러분은 위의 이미지를 보고 다음과 같은 생각을 하셨을 겁니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;음.. 나누기가 있네? &quot;나누기&quot;하란 뜻이구나!&quot;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그런데, 여기서 문제가 발생합니다. '나누기'를 하려고 하는데 로마(숫)자 표기가 되어있죠. 로마자를 이용하여 나누기를 하는건 생각보다 어려운 문제입니다. 그래서 여러분은 &lt;b&gt;무의식적으&lt;/b&gt;로 위의 &lt;b&gt;로마자&lt;/b&gt;를 &quot;0, 1, 2, ..., 9&quot;와 같은 &lt;b&gt;아라비아 숫자로 바꾸어 나누기&lt;/b&gt;를 하려고 할 것 입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span&gt;&quot;즉, 나누기를 하기 위해서는 아리비아 숫자로 표기하는 것이 좀 더 수월한 것이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그럼, 지금까지 이야기 한 내용을 좀 더 자세히 설명해보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리는 보통 &lt;b&gt;어떤 task&lt;/b&gt;를 &lt;b&gt;해결&lt;/b&gt;하기 위해서 task와 관련된 &lt;b&gt;정보&lt;/b&gt;들을 &lt;b&gt;이용&lt;/b&gt;합니다. 예를 들어, &lt;b&gt;나누기(=task)&lt;/b&gt;를 하려고 하면 &quot;&lt;b&gt;수(=numeric)&quot;&lt;/b&gt;라는 &lt;b&gt;정보&lt;/b&gt;를 이용하죠. 하지만, 이러한 &lt;b&gt;&quot;수(=numeric)&quot;&lt;/b&gt;라는 정보들은 &lt;b&gt;다양하게 표현(=representation)&lt;/b&gt;될 수 있습니다. 수를 표현하는 방법에는 &lt;b&gt;'로마숫자표기(=Roman numerical representation)'&lt;/b&gt;, &lt;b&gt;'아라비아숫자표기(Arabic numerial representation)'&lt;/b&gt; 등이 있죠.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;793&quot; data-origin-height=&quot;445&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cmcZ5u/btrq9YiYaqU/w9fIvsldE9U7J4rkj5p6Kk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cmcZ5u/btrq9YiYaqU/w9fIvsldE9U7J4rkj5p6Kk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cmcZ5u/btrq9YiYaqU/w9fIvsldE9U7J4rkj5p6Kk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcmcZ5u%2Fbtrq9YiYaqU%2Fw9fIvsldE9U7J4rkj5p6Kk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;625&quot; height=&quot;351&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;793&quot; data-origin-height=&quot;445&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통&lt;b&gt; task&lt;/b&gt;들의 &lt;b&gt;난이도&lt;/b&gt;는 &lt;b&gt;정보&lt;/b&gt;들을 &lt;b&gt;어떻게 표현(=representation)&lt;/b&gt;해주느냐에 따라서 &lt;b&gt;결정&lt;/b&gt;이 됩니다. 즉, 정보들을 특정 task에 맞게 잘 표현(=representation)해주면 해당 task를 풀 수 있는 확률이 높아지는 것이죠. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를들어, 우리가 수를 나눈다고 가정해보겠습니다. 이 때, 수(=numeric)라는 정보를 로마숫자(=Roman numerical representation)로 표현하면 어떻게 될까요? 다시 말해, &quot;CCX&amp;divide;VI&quot;라는 문제(=task)가 주어주면 풀기가(=해결하기가) 매우 힘들 것 입니다. 하지만, 나누기라는 task를 &quot;210&amp;divide;6&quot;과 같이 아라비아 숫자(=Arabic numerial representation)로 표현하면 금방 해결하실 수 있으실 겁니다. 왜냐하면 아라비아 숫자를 이용하면 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;'place-value system'&lt;/b&gt;&lt;/span&gt;을 이용할 수 있기 때문입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;ldquo;The Roman numeral system (I, II, III, IV,...) lacks an efficient way to represent place, and it makes simple arithmetic functions (&amp;lt;- ex: division) very difficult to perform for most people. So, we need a place-value system. &lt;span style=&quot;color: #ee2323;&quot;&gt;​&lt;b&gt;A&amp;nbsp;place-value&amp;nbsp;system&lt;/b&gt;&lt;/span&gt;&amp;nbsp;assigns&amp;nbsp;a&amp;nbsp;certain&amp;nbsp;value&amp;nbsp;to&amp;nbsp;the&amp;nbsp;spatial&amp;nbsp;location&amp;nbsp;of&amp;nbsp;a&amp;nbsp;number&amp;nbsp;in&amp;nbsp;a&amp;nbsp;series.​&amp;rdquo;​&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;1547028704_Place-Value-of-digits-in-numbers-and-decimal-numbers-Ones-tens-hundreds-thousands-through-millions.png&quot; data-origin-width=&quot;680&quot; data-origin-height=&quot;264&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/YZ42D/btrrfJEdNih/kC5oYFmkyOGdvGGiafryVK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/YZ42D/btrrfJEdNih/kC5oYFmkyOGdvGGiafryVK/img.png&quot; data-alt=&quot;이미지 출처:https://www.splashlearn.com/math-vocabulary/place-value/place-value&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/YZ42D/btrrfJEdNih/kC5oYFmkyOGdvGGiafryVK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FYZ42D%2FbtrrfJEdNih%2FkC5oYFmkyOGdvGGiafryVK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;680&quot; height=&quot;264&quot; data-filename=&quot;1547028704_Place-Value-of-digits-in-numbers-and-decimal-numbers-Ones-tens-hundreds-thousands-through-millions.png&quot; data-origin-width=&quot;680&quot; data-origin-height=&quot;264&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://www.splashlearn.com/math-vocabulary/place-value/place-value&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(위에서 한 이야기를 아래와 같이 정리해볼 수 있겠네요!)&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;403&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b33tUr/btrrbTn5TEU/113OrSAlScuqrKPezKtnF0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b33tUr/btrrbTn5TEU/113OrSAlScuqrKPezKtnF0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b33tUr/btrrbTn5TEU/113OrSAlScuqrKPezKtnF0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb33tUr%2FbtrrbTn5TEU%2F113OrSAlScuqrKPezKtnF0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;648&quot; height=&quot;403&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;648&quot; data-origin-height=&quot;403&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이렇듯, 우리가 나누기를 할 때, 수(=numeric)라는 정보를 보통 아라비아 숫자로 가공(=processing)하여 표현(=representation)하게 됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;또 다른 예시&lt;/b&gt;로는 &lt;b&gt;알고리즘&lt;/b&gt;이 있습니다. (&lt;span style=&quot;color: #ee2323;&quot;&gt;이 부분은 자료구조의 개념이 필요하기 때문에 구체적으로 이해가 안되시는 분들은&lt;/span&gt; &lt;b&gt;&quot;linked list, binary tree search, 시간복잡도&quot;&lt;/b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;와 관련된 개념들을 찾아 보시길 권유합니다&lt;/span&gt;. &lt;span style=&quot;color: #009a87;&quot;&gt;시간이나면 이 개념들에 대한 글을 따로 리포팅 하도록 하겠습니다&lt;/span&gt;.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리에게 어떤 수들이 주어져있고, 이 &lt;b&gt;수들은 이미 오름차순으로 정렬&lt;/b&gt;이 되어 있다고 가정해보겠습니다. 이 때, 우리에게 &lt;b&gt;새로운 수 하나가 주어졌다&lt;/b&gt;고 했을 때, 그 수를 &lt;b&gt;올바른 위치에 삽입하는데 필요한 시간 복잡도&lt;/b&gt;가 어떻게 될까요? 이에 대한 답은 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;정렬된 수 들을 &lt;span style=&quot;color: #ee2323;&quot;&gt;어떻게 표현(=representation)&lt;/span&gt;해주느냐에 따라 다릅니다!&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;자료구조&lt;/b&gt;에서는 &lt;b&gt;정렬된 수&lt;/b&gt; 들을 아래와 같이 &lt;b&gt;두 가지 형태 (=linked list, binary tree) 형태&lt;/b&gt;로 &lt;b&gt;표현&lt;/b&gt;할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;LinkedListToBST.png&quot; data-origin-width=&quot;862&quot; data-origin-height=&quot;408&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cy0QSA/btrrfotpfY6/Fs1h3ytuyisEK1MF6UvGs0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cy0QSA/btrrfotpfY6/Fs1h3ytuyisEK1MF6UvGs0/img.png&quot; data-alt=&quot;그림출처:https://www.geeksforgeeks.org/given-linked-list-representation-of-complete-tree-convert-it-to-linked-representation/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cy0QSA/btrrfotpfY6/Fs1h3ytuyisEK1MF6UvGs0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fcy0QSA%2FbtrrfotpfY6%2FFs1h3ytuyisEK1MF6UvGs0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;685&quot; height=&quot;324&quot; data-filename=&quot;LinkedListToBST.png&quot; data-origin-width=&quot;862&quot; data-origin-height=&quot;408&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림출처:https://www.geeksforgeeks.org/given-linked-list-representation-of-complete-tree-convert-it-to-linked-representation/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(참고로 교재에서 설명하는 red-black tree는 binary search tree의 일종인 self-balancing binary search tree라고 하는데, 이에 대한 자세한 설명은 생략하고 red-black tree를 binary search tree로만 표현하여 설명하도록 하겠습니다.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Linked List 에 대한 시간복잡도 설명 &amp;darr;&amp;darr;&amp;darr;)&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=DzGnME1jIwY&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/zZxgy/hyM9p5bOxq/BI9hO5CJyxeSDZjbUK3sl0/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/DzGnME1jIwY&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://yongkis.tistory.com/22&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://yongkis.tistory.com/22&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642641467994&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;자료 구조에 따른 시간 복잡도 비교 - linked list편&quot; data-og-description=&quot;오늘은 지난 배열편에 이어 linked list(연결 리스트)라고 불리는 자료구조에 대해서 시간 복잡도와 연관해서 분석해보고, 현실에서 어떤 것과 닮아있는지를 통해서 심화학습 해보도록 하겠습니다&quot; data-og-host=&quot;yongkis.tistory.com&quot; data-og-source-url=&quot;https://yongkis.tistory.com/22&quot; data-og-url=&quot;https://yongkis.tistory.com/22&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/cJwJm5/hyM9tmdFCl/4mEYaTMjDVWqsYCkk0dR31/img.png?width=800&amp;amp;height=403&amp;amp;face=0_0_800_403,https://scrap.kakaocdn.net/dn/cvHpaO/hyM9n0BE0o/HznQsEB8lqPHwC4NO02SW0/img.png?width=800&amp;amp;height=403&amp;amp;face=0_0_800_403,https://scrap.kakaocdn.net/dn/bmVU50/hyM9h7ajOa/HKo8EIQ4mtnqAU6OhRJDK1/img.jpg?width=1500&amp;amp;height=2000&amp;amp;face=387_684_583_899&quot;&gt;&lt;a href=&quot;https://yongkis.tistory.com/22&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://yongkis.tistory.com/22&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/cJwJm5/hyM9tmdFCl/4mEYaTMjDVWqsYCkk0dR31/img.png?width=800&amp;amp;height=403&amp;amp;face=0_0_800_403,https://scrap.kakaocdn.net/dn/cvHpaO/hyM9n0BE0o/HznQsEB8lqPHwC4NO02SW0/img.png?width=800&amp;amp;height=403&amp;amp;face=0_0_800_403,https://scrap.kakaocdn.net/dn/bmVU50/hyM9h7ajOa/HKo8EIQ4mtnqAU6OhRJDK1/img.jpg?width=1500&amp;amp;height=2000&amp;amp;face=387_684_583_899');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;자료 구조에 따른 시간 복잡도 비교 - linked list편&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;오늘은 지난 배열편에 이어 linked list(연결 리스트)라고 불리는 자료구조에 대해서 시간 복잡도와 연관해서 분석해보고, 현실에서 어떤 것과 닮아있는지를 통해서 심화학습 해보도록 하겠습니다&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;yongkis.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Binary Search Tree에 대한 시간복잡도 설명 &amp;darr;&amp;darr;&amp;darr;)&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=xxADG17SveY&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/1D7wR/hyM9vYC38A/120K864t9Q3CvDqpngFs20/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/xxADG17SveY&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642639616521&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Red&amp;ndash;black tree - Wikipedia&quot; data-og-description=&quot;From Wikipedia, the free encyclopedia Jump to navigation Jump to search Self-balancing binary search tree data structure In computer science, a red&amp;ndash;black tree is a kind of self-balancing binary search tree. Each node stores an extra bit representing &amp;quot;col&quot; data-og-host=&quot;en.wikipedia.org&quot; data-og-source-url=&quot;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&quot; data-og-url=&quot;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://en.wikipedia.org/wiki/Red%E2%80%93black_tree&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Red&amp;ndash;black tree - Wikipedia&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;From Wikipedia, the free encyclopedia Jump to navigation Jump to search Self-balancing binary search tree data structure In computer science, a red&amp;ndash;black tree is a kind of self-balancing binary search tree. Each node stores an extra bit representing &quot;col&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;en.wikipedia.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 우리에게 주어진 수를 &lt;span style=&quot;color: #ee2323;&quot;&gt;어떠한 자료구조 형태 (ex: linked list or binary search tree)&lt;/span&gt; 로 &lt;span style=&quot;color: #ee2323;&quot;&gt;표현(=representation)&lt;/span&gt;하느냐에 따라서 &quot;정렬된 수에 특정 수를 삽입(=insert)&quot;하는 task의 시간복잡도(=난이도)가 결정됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;1_jiVqYhDzvODfVq6RH0DB1g.png&quot; data-origin-width=&quot;1132&quot; data-origin-height=&quot;776&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/OH0MR/btrq7HPbEvP/WwFkSrkLkAWsJM1MYc4p50/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/OH0MR/btrq7HPbEvP/WwFkSrkLkAWsJM1MYc4p50/img.png&quot; data-alt=&quot;이미지 출처:https://callmedevmomo.medium.com/%EC%9B%B9-%EA%B0%9C%EB%B0%9C%EC%9E%90%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%9E%90%EB%A3%8C%EA%B5%AC%EC%A1%B0%EC%99%80-%EC%95%8C%EA%B3%A0%EB%A6%AC%EC%A6%98-01-%EB%B9%85%EC%98%A4-%ED%91%9C%EA%B8%B0%EB%B2%95-ff369f0efc1d&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/OH0MR/btrq7HPbEvP/WwFkSrkLkAWsJM1MYc4p50/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOH0MR%2Fbtrq7HPbEvP%2FWwFkSrkLkAWsJM1MYc4p50%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;587&quot; height=&quot;402&quot; data-filename=&quot;1_jiVqYhDzvODfVq6RH0DB1g.png&quot; data-origin-width=&quot;1132&quot; data-origin-height=&quot;776&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://callmedevmomo.medium.com/%EC%9B%B9-%EA%B0%9C%EB%B0%9C%EC%9E%90%EB%A5%BC-%EC%9C%84%ED%95%9C-%EC%9E%90%EB%A3%8C%EA%B5%AC%EC%A1%B0%EC%99%80-%EC%95%8C%EA%B3%A0%EB%A6%AC%EC%A6%98-01-%EB%B9%85%EC%98%A4-%ED%91%9C%EA%B8%B0%EB%B2%95-ff369f0efc1d&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 이야기한 내용들을 함축적으로 정리하면 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;결국 우리는 어떤 task를 해결할 때, 정보(=information)를 어떻게 가공(processing)하여 표현(=representation)해줄지에 따라서 task의 난이도가 결정이 된다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Deep learning book 본문 내용&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;Many information processing tasks can be very easy or very difficult depending on how the information is represented.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면 &lt;span&gt;어떻게&lt;span&gt; &amp;nbsp;&lt;/span&gt;&lt;/span&gt;딥러닝 모델에서 representation이란 개념은 어떻게 이해해야 할까요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. 딥러닝 관점에서의 representation 이란? (by Supervised learning 관점)&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;2-1. What&amp;nbsp;is&amp;nbsp;representation&amp;nbsp;in&amp;nbsp;supervised&amp;nbsp;training&amp;nbsp;of&amp;nbsp;feedforward&amp;nbsp;networks?​&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, &lt;b&gt;Deep Neural Network (DNN)&lt;/b&gt; 를 살펴보겠습니다. 우리가 DNN을 사용하는 이유중 하나는 &lt;b&gt;non-linear problem을 linear classifier로 풀수 있기&lt;/b&gt; 때문입니다.&amp;nbsp; (자세한 설명은 아래 글을 참고해주세요!)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/23?category=868069&quot;&gt;https://89douner.tistory.com/23?category=868069&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1641342570750&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5.Multi-Layer Perceptron (MLP), Universal Theorem&quot; data-og-description=&quot;Q. 왜 단층 Perceptron 모델에서 Layer를 추가하게 되었나요? Q. Universal Approximation Theorem은 뭔가요? 2~4장까지 배웠던 부분을 아래와 같이 하나의 그림으로 요약을 할 수 있습니다. 1.입력값들이 가중치.&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/23?category=868069&quot; data-og-url=&quot;https://89douner.tistory.com/23&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bef0Be/hyMYTr1KcK/T0PDKoG3kIEakoP7ODVAW0/img.jpg?width=800&amp;amp;height=577&amp;amp;face=0_0_800_577,https://scrap.kakaocdn.net/dn/dGE1BI/hyMYNL41EY/Y0hLFbq0fF8s7exVhSVd2K/img.jpg?width=800&amp;amp;height=577&amp;amp;face=0_0_800_577,https://scrap.kakaocdn.net/dn/c6FiXn/hyMYOxrSHT/jVIvmSQ3aCr5tsBl1PZyjK/img.png?width=900&amp;amp;height=368&amp;amp;face=0_0_900_368&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/23?category=868069&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/23?category=868069&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bef0Be/hyMYTr1KcK/T0PDKoG3kIEakoP7ODVAW0/img.jpg?width=800&amp;amp;height=577&amp;amp;face=0_0_800_577,https://scrap.kakaocdn.net/dn/dGE1BI/hyMYNL41EY/Y0hLFbq0fF8s7exVhSVd2K/img.jpg?width=800&amp;amp;height=577&amp;amp;face=0_0_800_577,https://scrap.kakaocdn.net/dn/c6FiXn/hyMYOxrSHT/jVIvmSQ3aCr5tsBl1PZyjK/img.png?width=900&amp;amp;height=368&amp;amp;face=0_0_900_368');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5.Multi-Layer Perceptron (MLP), Universal Theorem&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Q. 왜 단층 Perceptron 모델에서 Layer를 추가하게 되었나요? Q. Universal Approximation Theorem은 뭔가요? 2~4장까지 배웠던 부분을 아래와 같이 하나의 그림으로 요약을 할 수 있습니다. 1.입력값들이 가중치.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1103&quot; data-origin-height=&quot;511&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kNeU1/btrrch98qEH/2RmHtIsIG9FfwbfQOLGc70/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kNeU1/btrrch98qEH/2RmHtIsIG9FfwbfQOLGc70/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kNeU1/btrrch98qEH/2RmHtIsIG9FfwbfQOLGc70/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkNeU1%2Fbtrrch98qEH%2F2RmHtIsIG9FfwbfQOLGc70%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1103&quot; height=&quot;511&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1103&quot; data-origin-height=&quot;511&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;여기서의 &lt;b&gt;task&lt;/b&gt;는 &lt;b&gt;linear classifier&lt;/b&gt;를 이용한 &lt;b&gt;classification task&lt;/b&gt;입니다. 그리고, &lt;b&gt;최초의 정보(information)&lt;/b&gt;라고 할 수 있는 &lt;b&gt;DNN의 input 값&lt;/b&gt;들(=다양한 독립변수들: x1, x2, ..., xn)을 &lt;b&gt;neural network로 가공(=processing)&lt;/b&gt;하여 &lt;b&gt;'learned h space'로 표현(=representation)&lt;/b&gt;해 줍니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;441&quot; data-origin-height=&quot;324&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bF9opK/btrq8JzC0JG/S6itKkcvCVii9vtrcD72Mk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bF9opK/btrq8JzC0JG/S6itKkcvCVii9vtrcD72Mk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bF9opK/btrq8JzC0JG/S6itKkcvCVii9vtrcD72Mk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbF9opK%2Fbtrq8JzC0JG%2FS6itKkcvCVii9vtrcD72Mk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;441&quot; height=&quot;324&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;441&quot; data-origin-height=&quot;324&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉&quot;, 최초의 input 값들을 &lt;span&gt;linear classifier가 올바르게 classification 할 수 있도록 'learned h space'로 &lt;/span&gt;표현(=representation)해준 것이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;2-2.&lt;span&gt;&amp;nbsp;&lt;/span&gt;What is representation in supervised&amp;nbsp;training&amp;nbsp;of&amp;nbsp;feedforward&amp;nbsp;networks?​&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;이미지&lt;/b&gt;를 &lt;b&gt;CNN&lt;/b&gt;으로 &lt;b&gt;분류&lt;/b&gt;하는 문제도 동일하게 생각해 볼 수 있습니다.&lt;b&gt; Supervised learning 기반의 이미지 분류&lt;/b&gt;에서도 흔히&lt;b&gt; softmax (linear classifier)&lt;/b&gt;를 &lt;b&gt;이용&lt;/b&gt;해 &lt;b&gt;최종 분류&lt;/b&gt;를 하게 됩니다. &lt;span style=&quot;color: #409d00;&quot;&gt;(Softmax가 왜 linear classifier인지는 뒤에서 설명하도록 하겠습니다)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;즉, 이미지를 linear classifier로 classification 하려는 task인 셈이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그런데, 일반적으로 DNN(=Deep Neural Network) 구조는 이미지 분류 성능이 좋지 않다고 알려져 있습니다 (자세한 설명은 아래 글을 참고해주세요!)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/58?category=873854&quot;&gt;https://89douner.tistory.com/58?category=873854&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1641343795454&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;4. CNN은 왜 이미지영역에서 두각을 나타나게 된건가요?&quot; data-og-description=&quot;안녕하세요~ 이번 시간에는 DNN의 단점을 바탕으로 CNN이 왜 이미지 영역에서 더 좋은 성과를 보여주는지 말씀드릴거에요~ 1) Weight(가중치) parameter 감소 (가중치 parameter가 많으면 안되는 이유를 참&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/58?category=873854&quot; data-og-url=&quot;https://89douner.tistory.com/58&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/pHRHT/hyMYJQt1SK/oRpkOXK6EhkL2w3sPkhjK0/img.png?width=750&amp;amp;height=274&amp;amp;face=0_0_750_274,https://scrap.kakaocdn.net/dn/uh2g2/hyMYHrzlkm/ApBmQjiSsH6xISLmw6i5dk/img.png?width=750&amp;amp;height=274&amp;amp;face=0_0_750_274,https://scrap.kakaocdn.net/dn/8rHMS/hyMYRnsibO/xSBosYqkv3cRelfWsjR3Lk/img.png?width=900&amp;amp;height=257&amp;amp;face=0_0_900_257&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/58?category=873854&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/58?category=873854&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/pHRHT/hyMYJQt1SK/oRpkOXK6EhkL2w3sPkhjK0/img.png?width=750&amp;amp;height=274&amp;amp;face=0_0_750_274,https://scrap.kakaocdn.net/dn/uh2g2/hyMYHrzlkm/ApBmQjiSsH6xISLmw6i5dk/img.png?width=750&amp;amp;height=274&amp;amp;face=0_0_750_274,https://scrap.kakaocdn.net/dn/8rHMS/hyMYRnsibO/xSBosYqkv3cRelfWsjR3Lk/img.png?width=900&amp;amp;height=257&amp;amp;face=0_0_900_257');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;4. CNN은 왜 이미지영역에서 두각을 나타나게 된건가요?&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요~ 이번 시간에는 DNN의 단점을 바탕으로 CNN이 왜 이미지 영역에서 더 좋은 성과를 보여주는지 말씀드릴거에요~ 1) Weight(가중치) parameter 감소 (가중치 parameter가 많으면 안되는 이유를 참&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 &quot;DNN 구조는 이미지 분류 성능이 좋지 않다&quot;고 언급한 부분은 아래와 같이 다르게 해석해 볼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;DNN 구조는 (linear classifier를 이용한) 이미지 분류 task를 하기 위해 최초의 정보들을 부적절하게 표현(=representation) 해준다&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서, DNN과 다르게 &lt;b&gt;visual feature&lt;/b&gt;들을 잘 뽑아(or 표현해) 낼 수 있는 &lt;span&gt;&lt;b&gt;Convolutional Neural Network (CNN)&lt;/b&gt;가 도입이 되게 됩니다&lt;/span&gt;. CNN은 &lt;span&gt;&lt;b&gt;convolutional filter&lt;/b&gt;를 이용한 &lt;b&gt;feature extractor&lt;/b&gt;를 갖는&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;데, 이러한 feature extractor 덕분에 &lt;b&gt;softmax linear classifier&lt;/b&gt;가 이미지를 &lt;b&gt;잘 classification&lt;/b&gt; 해줄 수 있도록 &lt;span style=&quot;color: #ee2323;&quot;&gt;최초 정보(=input 이미지)를 가공(=processing)하여 표현(=representation) 해주게 되는 것&lt;/span&gt;이죠. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;CNN achieves good representation for image classification using softmax linear classifier&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림17.png&quot; data-origin-width=&quot;1675&quot; data-origin-height=&quot;901&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dNhLoc/btrpDLFzORM/O2tZDdtmwtyRfjWasHDEkk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dNhLoc/btrpDLFzORM/O2tZDdtmwtyRfjWasHDEkk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dNhLoc/btrpDLFzORM/O2tZDdtmwtyRfjWasHDEkk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdNhLoc%2FbtrpDLFzORM%2FO2tZDdtmwtyRfjWasHDEkk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1675&quot; height=&quot;901&quot; data-filename=&quot;그림17.png&quot; data-origin-width=&quot;1675&quot; data-origin-height=&quot;901&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Input feature(=원본 이미지)&lt;/b&gt;가 &lt;b&gt;CNN&lt;/b&gt;에 입력되면 다양한 &lt;b&gt;conv layer에 의해 hierarchical feature map&lt;/b&gt;들이 얻어집니다. Task에 따라 feature extractor를 거쳐 얻은 최종 feature(=feature classifier에 입력되기 직전)의 표현(=representation)들이 달라질 것입니다. 예를 들어, classification task이면 classification에 맞는 최종 feature가 representation 될 것이고, segmentation task이면 segmentation에 맞는 최종 feature가 representation이 될 것 입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;책에서는 &lt;b&gt;subsequent learning task&lt;/b&gt;라고 언급하는 부분이 있는데, 이것이&lt;b&gt; 최종 task의 종류들(=ex: classification task, segmentation task, linear clasifier task, etc...)&lt;/b&gt; 을 의미하는 듯 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;Generally speaking, a good representation is one that makes a subsequent learning task easier. The choice of representation will usually depend on the choice of the subsequent learning task.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Learning-hierarchy-of-visual-features-in-CNN-architecture.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;482&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/5UDh0/btrpRWMb94s/ggL7auRdKLAmk2zhPtkNW1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/5UDh0/btrpRWMb94s/ggL7auRdKLAmk2zhPtkNW1/img.png&quot; data-alt=&quot;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;이미지 출처: https://www.researchgate.net/figure/Learning-hierarchy-of-visual-features-in-CNN-architecture_fig1_281607765&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/5UDh0/btrpRWMb94s/ggL7auRdKLAmk2zhPtkNW1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F5UDh0%2FbtrpRWMb94s%2FggL7auRdKLAmk2zhPtkNW1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;850&quot; height=&quot;482&quot; data-filename=&quot;Learning-hierarchy-of-visual-features-in-CNN-architecture.png&quot; data-origin-width=&quot;850&quot; data-origin-height=&quot;482&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;lt;이미지 출처: https://www.researchgate.net/figure/Learning-hierarchy-of-visual-features-in-CNN-architecture_fig1_281607765&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 DNN, CNN 모두 최종 task의 유형에 따라 &lt;b&gt;'new representation'&lt;/b&gt;에 해당하는 &lt;b&gt;new feature(&amp;lt;--&amp;gt; input feature)&lt;/b&gt;를 출력하게 됩니다. &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;이러한 'new representation'을 뽑게 학습하는 것을 representation learning이라고 부르는 것이죠.&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;DFN1.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;396&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; data-alt=&quot;이미지 출처:https://srdas.github.io/DLBook/NNDeepLearning.html&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCVlko%2FbtrrcOmzitX%2FMkV9JPEzmHjD8gBNVyYwIK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;704&quot; height=&quot;396&quot; data-filename=&quot;DFN1.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;396&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://srdas.github.io/DLBook/NNDeepLearning.html&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;좀 더 &lt;b&gt;구체적인 예시&lt;/b&gt;를 들어보겠습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, 우리가 &lt;b&gt;24&amp;times;24 이미지&lt;/b&gt;를 갖고 &lt;b&gt;네 가지 클래스&lt;/b&gt;를 &lt;b&gt;분류&lt;/b&gt;한다고 해보겠습니다. &lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Blue Square&lt;/li&gt;
&lt;li&gt;Blue Circle&lt;/li&gt;
&lt;li&gt;Red Square&lt;/li&gt;
&lt;li&gt;Red Circle&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 때,&lt;b&gt; 576&lt;span&gt;(=&lt;/span&gt;&lt;span&gt;24&amp;times;24)개&lt;/span&gt;&lt;/b&gt;의 &lt;b&gt;입력 feature&lt;/b&gt;로 부터 뽑을 수 있는 &lt;b&gt;가장 이상적인 (hidden) feature vector (=indepent vectors)&lt;/b&gt;를 뽑으라고 한다면, 아래와 같이 &lt;b&gt;'color', 'shape' 2차원 (hidden) feature vector&lt;/b&gt;가 될 것 입니다. 그리고 이러한 feature vector를 &lt;b&gt;'new representation'&lt;/b&gt;으로 볼 수 있겠네요. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;하지만, 'new representation'을 추출하는 network(=모델)가 무엇이냐에 따라서 달라집니다.&quot;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;아래 왼쪽 이미지&lt;/b&gt;에서 &lt;b&gt;'entangled space'&lt;/b&gt;를 &lt;b&gt;DNN&lt;/b&gt;을 이용하여 얻은 &lt;b&gt;(hidden) feature vector&lt;/b&gt;들이라고 가정한다면, DNN으로는 &lt;b&gt;'blue circles'를 linear classifier로 구분하기 힘듭&lt;/b&gt;니다. 반대로, &lt;b&gt;오른쪽 'disentangled space'&lt;/b&gt;를 &lt;b&gt;CNN&lt;/b&gt;을 통해 얻은 &lt;b&gt;(hiddent) feature vector&lt;/b&gt;들이라고 한다면, &lt;b&gt;4가지 클래스 모두 linear classifier로 충분히 분류&lt;/b&gt;할 수 있게 됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;다운로드.png&quot; data-origin-width=&quot;301&quot; data-origin-height=&quot;168&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m21Wu/btrrdXJWQ4O/M2dMZv5GwowCH99ZgrF8P1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m21Wu/btrrdXJWQ4O/M2dMZv5GwowCH99ZgrF8P1/img.png&quot; data-alt=&quot;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://arxiv.org/abs/2007.06356&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m21Wu/btrrdXJWQ4O/M2dMZv5GwowCH99ZgrF8P1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm21Wu%2FbtrrdXJWQ4O%2FM2dMZv5GwowCH99ZgrF8P1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;396&quot; height=&quot;221&quot; data-filename=&quot;다운로드.png&quot; data-origin-width=&quot;301&quot; data-origin-height=&quot;168&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://arxiv.org/abs/2007.06356&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;즉, CNN은 이미지를 linear clasifier로 잘 분류할 수 있도록 (hidden) feature vector를 잘 representation 해주는 것이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 설명한 내용을 종합하자면 결국 아래와 같이 정리해 볼 수 있겠네요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 딥러닝 모델 성능은 &quot;특정 task에 해당하는 최종 feature(=원본 데이터(=information)에서 가공된 데이터)들을 얼마나 잘 representation 해주느냐&quot;에 따라 달려있다.&amp;nbsp;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통, &lt;b&gt;feature learning, feature representation learning, representation learning&lt;/b&gt; 이라는 단어들이 자주 혼용되서 사용되는데, 위에서 언급한 바를 생각해보시면 어떠한 이유로 해당 용어를 혼용해서 사용했는지 조금은 파악하실 수 있으실 거라 생각됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;[Q. Softmax가 왜 linear classifier인가요?]&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 질문에 답을 하기전에 먼저 아래의 질문에 답을 해보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;'Q. logistic regression'이 어떻게 linear classifier가 될 수 있는가?'&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리는 &lt;b&gt;logistic regression&lt;/b&gt;을 &lt;b&gt;아래 그림&lt;/b&gt;으로 나타낼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;552&quot; data-origin-height=&quot;482&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/beFJLu/btrrisoXh0S/OtLW7d7bekYvJS9815nNdk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/beFJLu/btrrisoXh0S/OtLW7d7bekYvJS9815nNdk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/beFJLu/btrrisoXh0S/OtLW7d7bekYvJS9815nNdk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbeFJLu%2FbtrrisoXh0S%2FOtLW7d7bekYvJS9815nNdk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;552&quot; height=&quot;482&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;552&quot; data-origin-height=&quot;482&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리에게 X라는 input 데이터가 주어 졌을 때, Y가 0 또는 1을 구분 짓는 binary classification을 한다고 가정해보겠습니다. X가 주어졌을 때 Y가 0이 될 확률을 &quot;P(Y=0|X)&quot;와 같이 표현할 수 있고, 이것을 sigmoid function을 이용해 구체적인 (확률) 값으로 표현한다면 아래 이미지의 수식을 이용하게 됩니다. 또한, 'binary clasfication'이기 때문에 P(Y=1|X)인 경우의 확률 값을 &quot;1-P(Y=0|X)&quot;라고 표현 할 수 있게 되죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;509&quot; data-origin-height=&quot;205&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dkJdb3/btrriqrbKLx/XpGWjK8jy70uNNqMWec0b1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dkJdb3/btrriqrbKLx/XpGWjK8jy70uNNqMWec0b1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dkJdb3/btrriqrbKLx/XpGWjK8jy70uNNqMWec0b1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdkJdb3%2FbtrriqrbKLx%2FXpGWjK8jy70uNNqMWec0b1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;509&quot; height=&quot;205&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;509&quot; data-origin-height=&quot;205&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Binary classification 문제는 P(Y=1|X), P(Y=0|X) 둘 중 어느 확률이 더 큰지에 따라서 최종 output 값(=Y)을 1로 classification 할 것인지, 0으로 classification 할 것인지 결정합니다. 그리고 이것을 아래와 같이 정리해볼 수 있죠.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;348&quot; data-origin-height=&quot;386&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bT4QKt/btrrhS9rWxR/1gdGnhwAdBKGrXKqg8kLbk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bT4QKt/btrrhS9rWxR/1gdGnhwAdBKGrXKqg8kLbk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bT4QKt/btrrhS9rWxR/1gdGnhwAdBKGrXKqg8kLbk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbT4QKt%2FbtrrhS9rWxR%2F1gdGnhwAdBKGrXKqg8kLbk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;417&quot; height=&quot;463&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;348&quot; data-origin-height=&quot;386&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 수식을 통해 우리는 아래와 같이 정리해볼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;logistic regression이 linear method라고 불리는 이유는 linear한 계산을 통해 풀 수 있는 문제여서가 아니라, &lt;span style=&quot;color: #ee2323;&quot;&gt;logistic regression을 결정짓는 decision boundary가 여전히 linear하기 때문인 것&lt;/span&gt;이다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; logistic classifier가 linear classfier라는 설명을 참고한 링크 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642659004758&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Is Logistic Regression a linear classifier?&quot; data-og-description=&quot;A linear classifier is one where a hyperplane is formed by taking a linear combination of the features, such that one 'side' of the hyperplane predicts one class and the other 'side' predicts the other.&quot; data-og-host=&quot;homes.cs.washington.edu&quot; data-og-source-url=&quot;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&quot; data-og-url=&quot;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://homes.cs.washington.edu/~marcotcr/blog/linear-classifiers/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Is Logistic Regression a linear classifier?&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;A linear classifier is one where a hyperplane is formed by taking a linear combination of the features, such that one 'side' of the hyperplane predicts one class and the other 'side' predicts the other.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;homes.cs.washington.edu&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Softmax를 이용한 classification도 똑같은 맥락에서 linear classifier라고 할 수 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1049&quot; data-origin-height=&quot;466&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/J5cxf/btrrfnPBP0z/1F9K2JKqSBXDebwYu0vZMk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/J5cxf/btrrfnPBP0z/1F9K2JKqSBXDebwYu0vZMk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/J5cxf/btrrfnPBP0z/1F9K2JKqSBXDebwYu0vZMk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJ5cxf%2FbtrrfnPBP0z%2F1F9K2JKqSBXDebwYu0vZMk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1049&quot; height=&quot;466&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1049&quot; data-origin-height=&quot;466&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리가 왜 &lt;b&gt;softmax&lt;/b&gt;를 &lt;b&gt;'multinomial logistic regression'&lt;/b&gt;으로 부르는지 이해해보면 좀 더 구체적으로 이해하실 수 있으실거라 생각됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;Softmax is known as multinomial logistic regression&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;multinomial logistic regression == softmax&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642659967992&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;book&quot; data-og-title=&quot;Deep Learning&quot; data-og-description=&quot;This is an introduction to deep learning.&quot; data-og-host=&quot;srdas.github.io&quot; data-og-source-url=&quot;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&quot; data-og-url=&quot;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://srdas.github.io/DLBook/LinearLearningModels.html#multiclass-logistic-regression&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Deep Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;This is an introduction to deep learning.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;srdas.github.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;background-color: #edebe9; color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;background-color: #edebe9; color: #000000;&quot;&gt;​&lt;/span&gt;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3. Unsupervised learning 관점에서 representation (learning) 이란?&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 설명한&amp;nbsp; supervised learning 기반의 representation (learning)에 대한 연구도 이루어지고 있지만, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;오늘날 주로 연구되는 분야는 unsupervied learning 방식으로 하는 representation learning&lt;/b&gt; &lt;/span&gt;입니다.&lt;b&gt; Deeplearning book의 &quot; Chapter 15. Representation Learning&quot;&lt;/b&gt; 역시 &lt;b&gt;대부분 unsupervised learning 방식의 representation learning&lt;/b&gt;에 대해서 이야기하고 있습니다. 이에 대한 자세한 설명은 다음 포스팅에서 하도록 하겠고, 여기에서는 추상적인 이유에 대해서만 말씀드리도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 supervised learning 관점에서 representation을 해석해 봤다면, 이번에는 &lt;b&gt;unsupervised learning 관점에서 representation이 &lt;/b&gt;(supervised learning 관점에서의 representation과)&lt;b&gt; 어떻게 다르게 설명&lt;/b&gt;될 수 있는지 살펴보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Unsupervised learning은 쉽게 말해 unlabelded dataset을 이용한 학습 방식입니다. 물론, &lt;b&gt;unsupervised learning&lt;/b&gt;도 &lt;b&gt;objective(=loss) function&lt;/b&gt;이 있기 때문에&lt;b&gt; 그 나름대로의 representation을 학습&lt;/b&gt;하게 됩니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;DFN1.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;396&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; data-alt=&quot;이미지 출처:https://srdas.github.io/DLBook/NNDeepLearning.html&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CVlko/btrrcOmzitX/MkV9JPEzmHjD8gBNVyYwIK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCVlko%2FbtrrcOmzitX%2FMkV9JPEzmHjD8gBNVyYwIK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;704&quot; height=&quot;396&quot; data-filename=&quot;DFN1.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;396&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://srdas.github.io/DLBook/NNDeepLearning.html&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Unsupervised learning&lt;/b&gt;에서도&lt;b&gt; new representation에 해당하는 hidden feature vector&lt;/b&gt;를 구할 수 있습니다. 그런데, 우리는 이러한 representation을 &lt;b&gt;&lt;span style=&quot;color: #0593d3;&quot;&gt;명시적(explicit)&lt;/span&gt;&lt;/b&gt;으로 설정(=design)할 수 있습니다. 즉, &lt;b&gt;hidden feature vector&lt;/b&gt;들이 어떠한 &lt;b&gt;&lt;span style=&quot;color: #0593d3;&quot;&gt;density function(&amp;rarr; 뒤에서 더 자세히 설명)&lt;/span&gt;&lt;/b&gt;을 따를거라고 &lt;b&gt;명시적으로 표현(=representation)&lt;/b&gt;해 줄 수 있죠 (물론 이론이 아닌 '현실세계'에서는 굉장히 어려운 문제이지만요!).&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어, &lt;b&gt;VAE (=Variational Auto-Encoder)&lt;/b&gt; 같은 경우에도 &lt;b&gt;hidden feature vector (= latent vector)&lt;/b&gt;가 &lt;b&gt;normal distribution을 따를 것&lt;/b&gt;이라고 &lt;b&gt;명시적&lt;/b&gt;으로 &lt;b&gt;가정&lt;/b&gt;합니다. &lt;span&gt;이러한 내용을&lt;/span&gt; Deeplearning book에서는 일반화하여 아래와 같이 표현합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;Other kinds of representation learning algorithms are often explicitly designed to shape the representation in some particular way. (-&amp;gt;VAE같은 representation learning 알고리즘은 &lt;span&gt;representation에 해당하는 feature vector가 명시적(=explicit)으로 normal distribution을 따르겠금(=shape) 고안된다)&lt;/span&gt;&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;vae-diagram-1-1024x563.jpg&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;563&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bnNwst/btrrhSuXbU3/qNCMk4llPsSvQGSF1qtrk0/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bnNwst/btrrhSuXbU3/qNCMk4llPsSvQGSF1qtrk0/img.jpg&quot; data-alt=&quot;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://learnopencv.com/variational-autoencoder-in-tensorflow/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bnNwst/btrrhSuXbU3/qNCMk4llPsSvQGSF1qtrk0/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbnNwst%2FbtrrhSuXbU3%2FqNCMk4llPsSvQGSF1qtrk0%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;702&quot; height=&quot;386&quot; data-filename=&quot;vae-diagram-1-1024x563.jpg&quot; data-origin-width=&quot;1024&quot; data-origin-height=&quot;563&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://learnopencv.com/variational-autoencoder-in-tensorflow/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 설명한 개념들 중에 &lt;b&gt;density estimation&lt;/b&gt;과&lt;b&gt; explicit&lt;/b&gt; 이란 단어를 좀 더 설명해보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Q. Density estimation이 뭔가요?&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;여행을 하다가 지역 주민에게 아래와 같은 질문을 했다고 가정해보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;여기 육교 아래로 지나가는 차는 하루에 몇 대나 인가요?&quot;&lt;/b&gt;&lt;/i&gt;​&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;EVTVEs2UUAAlIbi.jpeg&quot; data-origin-width=&quot;2048&quot; data-origin-height=&quot;1366&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oSJHA/btrrdYbvQk1/ab7RW6qlVzu2NiMHqhHuO0/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oSJHA/btrrdYbvQk1/ab7RW6qlVzu2NiMHqhHuO0/img.jpg&quot; data-alt=&quot;그림출처:https://twitter.com/cjndrama/status/1248857284177874944&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oSJHA/btrrdYbvQk1/ab7RW6qlVzu2NiMHqhHuO0/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoSJHA%2FbtrrdYbvQk1%2Fab7RW6qlVzu2NiMHqhHuO0%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;416&quot; height=&quot;277&quot; data-filename=&quot;EVTVEs2UUAAlIbi.jpeg&quot; data-origin-width=&quot;2048&quot; data-origin-height=&quot;1366&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림출처:https://twitter.com/cjndrama/status/1248857284177874944&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 질문에 대한 지역 주민들의 답은 아래와 같이 굉장히 다양할 것입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;주민1: 어제 봤더니 300대 인거 같던데?&lt;/li&gt;
&lt;li&gt;주민2: 일주일 전에 봤더니 500대 인거 같던데?&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 답을 통해 우리는 아래와 같은 사실들을 알 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;1&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;차가 하루에 몇 대가 지나갈지는 정해지지 않음&amp;nbsp;&lt;/span&gt;&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;1&quot; data-aria-level=&quot;1&quot;&gt;&lt;span&gt;즉, 차가 하루에 100 대 지나간다는 건 매일 어김없이 100대가 지나간다는 '절대진리'와 같음 ​&lt;/span&gt;&lt;/li&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;3&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;어떤 질문에 대해 상수&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;로 답변하는 것은 &lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;절대 진리 이야기 하는 것과&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&amp;nbsp;같음&lt;/span&gt;&lt;/span&gt;&lt;span&gt;​&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;2&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&lt;span&gt;그러므로, &lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;'하루(&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;일일) 교통량&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;'&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;을 상수 개념이 아닌&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;변수&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;개념으로 봐야 함&lt;/span&gt;&lt;/span&gt;&lt;span&gt;​&lt;/span&gt;&lt;/li&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;3&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&amp;lsquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;사회&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&amp;nbsp;or&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;자연&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;현상&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&amp;rsquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;과 관련된 개념들은 '변수' 개념들로 봐야하는 경우가 대부분&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;3&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;위와 같은 '변수' 성격을 띄는 '(사회 or 자연)현상'과 같은 질문들에 대해, 그럴듯한 답을 하기 위해선 &amp;lsquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;확률&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&amp;rsquo;이 필요&lt;/span&gt;&lt;/li&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;3&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;즉, 다수의 관측 결과를 기반으로 어떠한 현상을 확률적으로 설명해야 함&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li data-charcodes=&quot;232&quot; data-font=&quot;Wingdings,Sans-Serif&quot; data-buautonum=&quot;8&quot; data-margin=&quot;450&quot; data-aria-posinset=&quot;3&quot; data-aria-level=&quot;1&quot;&gt;&lt;span style=&quot;letter-spacing: 0px;&quot;&gt;이 때, 필요한 개념이 &lt;b&gt;'​Probability (Density) Distribution​'&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;505&quot; data-origin-height=&quot;214&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bY4ulK/btrriqEYrlG/sXcig98Fmoz7NzgWyYe20K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bY4ulK/btrriqEYrlG/sXcig98Fmoz7NzgWyYe20K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bY4ulK/btrriqEYrlG/sXcig98Fmoz7NzgWyYe20K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbY4ulK%2FbtrriqEYrlG%2FsXcig98Fmoz7NzgWyYe20K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;505&quot; height=&quot;214&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;505&quot; data-origin-height=&quot;214&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;603&quot; data-origin-height=&quot;223&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/JEIEf/btrrisbIldV/cLRYfNk9pl4ntsiJhdRmEk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/JEIEf/btrrisbIldV/cLRYfNk9pl4ntsiJhdRmEk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/JEIEf/btrrisbIldV/cLRYfNk9pl4ntsiJhdRmEk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FJEIEf%2FbtrrisbIldV%2FcLRYfNk9pl4ntsiJhdRmEk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;541&quot; height=&quot;200&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;603&quot; data-origin-height=&quot;223&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 스마트폰 수명과 exponential probability distribution 간의 관계&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://math100.tistory.com/36&quot;&gt;https://math100.tistory.com/36&lt;/a&gt;​&amp;nbsp;&lt;/p&gt;
&lt;figure id=&quot;og_1642665892117&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;지수분포 문제풀이&quot; data-og-description=&quot;이전 글에서 지수분포는 시간이 지날수록 확률이 점점 작아지는 경우에 사용한다고 했었는데, 지수분포는 &amp;ldquo;이하일 확률&amp;rdquo;과 &amp;ldquo;이상일 확률&amp;rdquo;을 구하는 공식이 서로 다르다. 그래서 문제를 풀 &quot; data-og-host=&quot;math100.tistory.com&quot; data-og-source-url=&quot;https://math100.tistory.com/36&quot; data-og-url=&quot;https://math100.tistory.com/36&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/eYeYV/hyM9l9YPgK/UsSSneGjxXaXjUYl6Pw3Q0/img.png?width=410&amp;amp;height=410&amp;amp;face=0_0_410_410,https://scrap.kakaocdn.net/dn/cBK42C/hyM9k4iBew/yIjBLV2AZGpogkjUnuY8S0/img.png?width=410&amp;amp;height=410&amp;amp;face=0_0_410_410,https://scrap.kakaocdn.net/dn/yEKes/hyM9ulxXFa/GP6K2bD2t1oEWg3ykYpBCk/img.png?width=724&amp;amp;height=634&amp;amp;face=0_0_724_634&quot;&gt;&lt;a href=&quot;https://math100.tistory.com/36&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://math100.tistory.com/36&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/eYeYV/hyM9l9YPgK/UsSSneGjxXaXjUYl6Pw3Q0/img.png?width=410&amp;amp;height=410&amp;amp;face=0_0_410_410,https://scrap.kakaocdn.net/dn/cBK42C/hyM9k4iBew/yIjBLV2AZGpogkjUnuY8S0/img.png?width=410&amp;amp;height=410&amp;amp;face=0_0_410_410,https://scrap.kakaocdn.net/dn/yEKes/hyM9ulxXFa/GP6K2bD2t1oEWg3ykYpBCk/img.png?width=724&amp;amp;height=634&amp;amp;face=0_0_724_634');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;지수분포 문제풀이&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;이전 글에서 지수분포는 시간이 지날수록 확률이 점점 작아지는 경우에 사용한다고 했었는데, 지수분포는 &amp;ldquo;이하일 확률&amp;rdquo;과 &amp;ldquo;이상일 확률&amp;rdquo;을 구하는 공식이 서로 다르다. 그래서 문제를 풀&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;math100.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위와 같이 다양한 현상들을 확률적으로 표현해주는 probability distribution이 존재합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;755&quot; data-origin-height=&quot;545&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HpUPf/btrrh5181Jj/MVUqlORgsKeCQNYqHLVKmk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HpUPf/btrrh5181Jj/MVUqlORgsKeCQNYqHLVKmk/img.png&quot; data-alt=&quot;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://destrudo.tistory.com/16&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HpUPf/btrrh5181Jj/MVUqlORgsKeCQNYqHLVKmk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHpUPf%2Fbtrrh5181Jj%2FMVUqlORgsKeCQNYqHLVKmk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;755&quot; height=&quot;545&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;755&quot; data-origin-height=&quot;545&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;nbsp;https://destrudo.tistory.com/16&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞에 설명한 내용을&amp;nbsp; 다시 정리해보겠습니다.&lt;/span&gt;&lt;br /&gt;&lt;br /&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;관측 데이터가 한달, 두 달, 1년 넘게 쌓이게 되면 우리는 '일일 교통량'이란 변수가 어떤 (확률분포) 특성을 갖는지 좀더 정확히 파악할 수 있게 됩니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리는 여기서 '변수'와 '데이터(=관측 값)'에 대한 관계를 정의해볼 수 있습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;어떤 변수가 가질 수 있는 다양한 가능성 중의 하나가 현실 세계에 구체화된&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;값을 데이터라고 부릅니다. 즉, 데이터는 변수의 일면에 불과한 것이죠.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot; data-usefontface=&quot;true&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&amp;ldquo;우리는 이렇게 관측된 데이터들을 통해 그 변수(random variable)가 가지고 있는 본질적인 특성을 probability density distribution으로 설명 또는 추정(estimate)하려고 하는데, 이를 'density estimation (밀도추정)' 이라고 합니다&amp;rdquo;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;span style=&quot;background-color: #edebe9; color: #000000;&quot; data-usefontface=&quot;false&quot; data-contrast=&quot;none&quot;&gt;&lt;span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;background-color: #edebe9; color: #000000;&quot;&gt;​&lt;/span&gt;&lt;/p&gt;
&lt;div&gt;&amp;nbsp;&lt;/div&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Density estimation에 대한 글을 참고한 사이트 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://darkpgmr.tistory.com/147&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://darkpgmr.tistory.com/147&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642752867303&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Kernel Density Estimation(커널밀도추정)에 대한 이해&quot; data-og-description=&quot;얼마전 한 친구가 KDE라는 용어를 사용하기에 KDE가 뭐냐고 물어보니 Kernel Density Estimation이라 한다. 순간, Kernel Density Estimation이 뭐지? 하는 의구심이 생겨서 그 친구에게 물어보니 자기도 잘 모른.&quot; data-og-host=&quot;darkpgmr.tistory.com&quot; data-og-source-url=&quot;https://darkpgmr.tistory.com/147&quot; data-og-url=&quot;https://darkpgmr.tistory.com/147&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/RKH3k/hyNaxPcHv0/K2Lnmt8azge1r5d8gUsiVK/img.png?width=550&amp;amp;height=216&amp;amp;face=0_0_550_216,https://scrap.kakaocdn.net/dn/blNkOM/hyM9vlabqC/CFGb7FZ3gTQZgP3tHzcv80/img.png?width=550&amp;amp;height=216&amp;amp;face=0_0_550_216,https://scrap.kakaocdn.net/dn/brcXhx/hyM9oGlUXV/uK9Ld1Bfr7Gm0Z5ylkYfR0/img.png?width=480&amp;amp;height=480&amp;amp;face=0_0_480_480&quot;&gt;&lt;a href=&quot;https://darkpgmr.tistory.com/147&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://darkpgmr.tistory.com/147&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/RKH3k/hyNaxPcHv0/K2Lnmt8azge1r5d8gUsiVK/img.png?width=550&amp;amp;height=216&amp;amp;face=0_0_550_216,https://scrap.kakaocdn.net/dn/blNkOM/hyM9vlabqC/CFGb7FZ3gTQZgP3tHzcv80/img.png?width=550&amp;amp;height=216&amp;amp;face=0_0_550_216,https://scrap.kakaocdn.net/dn/brcXhx/hyM9oGlUXV/uK9Ld1Bfr7Gm0Z5ylkYfR0/img.png?width=480&amp;amp;height=480&amp;amp;face=0_0_480_480');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Kernel Density Estimation(커널밀도추정)에 대한 이해&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;얼마전 한 친구가 KDE라는 용어를 사용하기에 KDE가 뭐냐고 물어보니 Kernel Density Estimation이라 한다. 순간, Kernel Density Estimation이 뭐지? 하는 의구심이 생겨서 그 친구에게 물어보니 자기도 잘 모른.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;darkpgmr.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;Q. Density estimation과 explicit은 어떤 관계인가요?&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 배운&lt;b&gt; density estimation&lt;/b&gt;은 &lt;b&gt;두&lt;/b&gt; 가지 &lt;b&gt;종류&lt;/b&gt;로 나뉩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1008&quot; data-origin-height=&quot;272&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bl3pIX/btrrhF3JPAH/B77GXvkXtUS0dOUm77bTtK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bl3pIX/btrrhF3JPAH/B77GXvkXtUS0dOUm77bTtK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bl3pIX/btrrhF3JPAH/B77GXvkXtUS0dOUm77bTtK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbl3pIX%2FbtrrhF3JPAH%2FB77GXvkXtUS0dOUm77bTtK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;601&quot; height=&quot;162&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1008&quot; data-origin-height=&quot;272&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(이 글에서는 &lt;b&gt;explicit&lt;/b&gt;에 대한 개념을 &lt;b&gt;설명&lt;/b&gt;하는 것이 &lt;b&gt;목적&lt;/b&gt;이므로 &lt;b&gt;parametric density estimation&lt;/b&gt;에 대해서만 &lt;b&gt;설명&lt;/b&gt;하겠습니다)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;Parametric density estimation은 변수(=random variable)이 특정 PDF(=Probability Density Function)를 따를 것이라 가정합니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;PDF에 대한 확률모델 (=probability density distribution)을 정해 놓고 모델의 파라미터만 추정&lt;/b&gt;&lt;/span&gt;하는 방식이죠. &lt;span style=&quot;color: #409d00;&quot;&gt;(참고로 여기서 말하는 parameter는 딥러닝에서 말하는 weight와 다릅니다!) &lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 예를 들었던, &lt;b&gt;VAE&lt;/b&gt;를 통해서 &lt;b&gt;설명&lt;/b&gt;하자면 다음과 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Latent vector (Z space)가 normal distribution을 따를 것이라 가정 &amp;larr; a.k.a prior distribution​&lt;/li&gt;
&lt;li&gt;Z latent vector의 mean, std만 구하면 Z space를 설명할 수 있음​&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;607&quot; data-origin-height=&quot;403&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bwZPhS/btrriVrey7k/zrd98YkWFCU4jZk5O9K651/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bwZPhS/btrriVrey7k/zrd98YkWFCU4jZk5O9K651/img.png&quot; data-alt=&quot;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://davideliu.com/2019/11/08/variational-autoencoder/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bwZPhS/btrriVrey7k/zrd98YkWFCU4jZk5O9K651/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbwZPhS%2FbtrriVrey7k%2Fzrd98YkWFCU4jZk5O9K651%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;497&quot; height=&quot;330&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;607&quot; data-origin-height=&quot;403&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;nbsp;https://davideliu.com/2019/11/08/variational-autoencoder/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Screen-Shot-2018-06-20-at-2.48.42-PM.png&quot; data-origin-width=&quot;1091&quot; data-origin-height=&quot;506&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; data-alt=&quot;이미지 출처:https://www.jeremyjordan.me/variational-autoencoders/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FN3isu%2FbtrrcHVJQLI%2FGz9hcKMxjL9dj6sze1PerK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1091&quot; height=&quot;506&quot; data-filename=&quot;Screen-Shot-2018-06-20-at-2.48.42-PM.png&quot; data-origin-width=&quot;1091&quot; data-origin-height=&quot;506&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://www.jeremyjordan.me/variational-autoencoders/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; VAE에 대한 설명을 참조한 글 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.jeremyjordan.me/variational-autoencoders/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.jeremyjordan.me/variational-autoencoders/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642666953064&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Variational autoencoders.&quot; data-og-description=&quot;In my introductory post on autoencoders, I discussed various models (undercomplete, sparse, denoising, contractive) which take data as input and discover some latent state representation of that data. More specifically, our input data is converted into an &quot; data-og-host=&quot;www.jeremyjordan.me&quot; data-og-source-url=&quot;https://www.jeremyjordan.me/variational-autoencoders/&quot; data-og-url=&quot;https://www.jeremyjordan.me/variational-autoencoders/&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://www.jeremyjordan.me/variational-autoencoders/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.jeremyjordan.me/variational-autoencoders/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Variational autoencoders.&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;In my introductory post on autoencoders, I discussed various models (undercomplete, sparse, denoising, contractive) which take data as input and discover some latent state representation of that data. More specifically, our input data is converted into an&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.jeremyjordan.me&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;CS231&lt;/b&gt; 수업에서도 &lt;b&gt;generative model&lt;/b&gt;을 아래와 같이 구분하고 있는데,&lt;b&gt; VAE&lt;/b&gt;와 같은 모델을 &lt;b&gt;explicit density estimation&lt;/b&gt;으로 보고 있고, &lt;span style=&quot;color: #0593d3;&quot;&gt;&lt;b&gt;이것을 통해 'parametric=explicit' 관계로 해석&lt;/b&gt;&lt;/span&gt;했습니다 (&lt;b&gt;이와 같은 해석은 저의 주관적 해석&lt;/b&gt;이니 다른 의견이 있으신 분들은 댓글 달아주시면 감사하겠습니다!).&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;812&quot; data-origin-height=&quot;491&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/6RZKt/btrrdnbHqod/cZEwOP7RApMwTkLyFuyVwk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/6RZKt/btrrdnbHqod/cZEwOP7RApMwTkLyFuyVwk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/6RZKt/btrrdnbHqod/cZEwOP7RApMwTkLyFuyVwk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F6RZKt%2FbtrrdnbHqod%2FcZEwOP7RApMwTkLyFuyVwk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;812&quot; height=&quot;491&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;812&quot; data-origin-height=&quot;491&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Deeplearning book&lt;/b&gt;에서는 &lt;b&gt;&quot;Supervised training of feedforward networks does not involve explicitly imposing any condition on the learned intermediate features.&quot;&lt;/b&gt;와 같은 표현을 합니다. 이 문장을 개인적으로 해석하면 아래와 같다고 생각합니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;any condition: probability density function&lt;/li&gt;
&lt;li&gt;learned intermediate features: pre-trained model의 feature들 &amp;larr; DCGAN에서 pre-trained model의 feature들을 intermediate features라고 묘사하고 있음&lt;/li&gt;
&lt;li&gt;&lt;span&gt;does not involve explicitly imposing any condition on the learned intermediate features. &amp;larr; Supervised learning으로 이미 학습된 feature들은 어떤 PDF를 따를거라고 명시적(=explicit)으로 지정되지 않음&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 언급한 &lt;b&gt;latent vector들&lt;/b&gt;이 (dependent vector들을 포함하기 보다) &lt;b&gt;independent vector들을 더 많이 포함&lt;/b&gt;할 수 록 좀 더 &lt;b&gt;고차원 데이터를 쉽게 표현&lt;/b&gt;해 줄 수 있습니다. 예를 들어, 사람 얼굴을 4개의 latent vector들로 표현한다고 했을 때, &quot;x1=얼굴색, x2=인종, x3=머리색, x4=수염&quot; 이렇게 표현하면 x1과 x2는 어느 정도 종속관계 (dependent) 이기 때문에 (상대적으로) 표현력이 떨어지게 되지만, 반대로 &quot;x1=인종, x2=수염, x3=머리색, x4=안경여부&quot; 이렇게 표현한다면 더 많은 사람들의 얼굴을 잘 표현할 수 있을 것입니다.&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Screen-Shot-2018-06-20-at-2.48.42-PM.png&quot; data-origin-width=&quot;1091&quot; data-origin-height=&quot;506&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; data-alt=&quot;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;amp;nbsp;https://www.jeremyjordan.me/variational-autoencoders/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/N3isu/btrrcHVJQLI/Gz9hcKMxjL9dj6sze1PerK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FN3isu%2FbtrrcHVJQLI%2FGz9hcKMxjL9dj6sze1PerK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1091&quot; height=&quot;506&quot; data-filename=&quot;Screen-Shot-2018-06-20-at-2.48.42-PM.png&quot; data-origin-width=&quot;1091&quot; data-origin-height=&quot;506&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:&amp;amp;amp;amp;amp;amp;amp;nbsp;https://www.jeremyjordan.me/variational-autoencoders/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;그래서, latent vector들이 최대한 independent 하도록 objective(=loss) function을 설계하는 것이 핵심 포인트가 됩니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;4. Multi-task learning 그리고 shared internal representation&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리에게 굉장히 &lt;b&gt;적은 데이터&lt;/b&gt;들이 주어졌을 때, &lt;b&gt;supervised learning&lt;/b&gt;을 적용한다면 흔히 &lt;b&gt;overfitting&lt;/b&gt; 문제가 발생할 것입니다. 예를 들어, 우리에게 아래와 같은 Chest X-Ray 폐렴 데이터만 갖고 있다고 해보겠습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (2).png&quot; data-origin-width=&quot;262&quot; data-origin-height=&quot;262&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/balyPd/btrrjB1WUQN/bR6gl3GrdZF6cvJfvN6qw1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/balyPd/btrrjB1WUQN/bR6gl3GrdZF6cvJfvN6qw1/img.png&quot; data-alt=&quot;이미지 출처:https://www.openaccessjournals.com/articles/advanced-neural-network-solution-for-detection-of-lung-pathology-and-foreign-body-on-chest-plain-radiographs-13104.html&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/balyPd/btrrjB1WUQN/bR6gl3GrdZF6cvJfvN6qw1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbalyPd%2FbtrrjB1WUQN%2FbR6gl3GrdZF6cvJfvN6qw1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;262&quot; height=&quot;262&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (2).png&quot; data-origin-width=&quot;262&quot; data-origin-height=&quot;262&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;이미지 출처:https://www.openaccessjournals.com/articles/advanced-neural-network-solution-for-detection-of-lung-pathology-and-foreign-body-on-chest-plain-radiographs-13104.html&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (3).png&quot; data-origin-width=&quot;264&quot; data-origin-height=&quot;261&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bEkvL5/btrrosCTXCP/HS6ulPKmMJO4nOvwYYx141/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bEkvL5/btrrosCTXCP/HS6ulPKmMJO4nOvwYYx141/img.png&quot; data-alt=&quot;그림 출처:https://www.openaccessjournals.com/articles/advanced-neural-network-solution-for-detection-of-lung-pathology-and-foreign-body-on-chest-plain-radiographs-13104.html&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bEkvL5/btrrosCTXCP/HS6ulPKmMJO4nOvwYYx141/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbEkvL5%2FbtrrosCTXCP%2FHS6ulPKmMJO4nOvwYYx141%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;258&quot; height=&quot;255&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (3).png&quot; data-origin-width=&quot;264&quot; data-origin-height=&quot;261&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처:https://www.openaccessjournals.com/articles/advanced-neural-network-solution-for-detection-of-lung-pathology-and-foreign-body-on-chest-plain-radiographs-13104.html&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;의사&lt;/b&gt;들이 보는 &lt;b&gt;폐렴 증상&lt;/b&gt;은 분명 &lt;b&gt;폐주위의 섬유화&lt;/b&gt;에 주목하겠지만, &lt;b&gt;딥러닝&lt;/b&gt;의 경우는 &lt;b&gt;CAM(=Class Activation Map)&lt;/b&gt;을 통해 살펴보면 &lt;b&gt;엉뚱한데&lt;/b&gt;를 &lt;b&gt;주목&lt;/b&gt;하는 경향이 있습니다. 아래와 같이 엉뚱한 곳을 보는 &lt;b&gt;이유&lt;/b&gt;는 &lt;b&gt;폐렴 관련 데이터는 저런 부분들만 살펴봐도 분류가 가능했기 때문&lt;/b&gt;일 수 있습니다. 즉, 정답의 이유는 상관없이 &quot;학습 데이터의 정답만 맞추면 된다&quot;는 식인 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (4).png&quot; data-origin-width=&quot;261&quot; data-origin-height=&quot;260&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eoCQ2M/btrroXWIL6h/nIOHKLEUuZgo3YEMbtqWFk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eoCQ2M/btrroXWIL6h/nIOHKLEUuZgo3YEMbtqWFk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eoCQ2M/btrroXWIL6h/nIOHKLEUuZgo3YEMbtqWFk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeoCQ2M%2FbtrroXWIL6h%2FnIOHKLEUuZgo3YEMbtqWFk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;261&quot; height=&quot;260&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (4).png&quot; data-origin-width=&quot;261&quot; data-origin-height=&quot;260&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (5).png&quot; data-origin-width=&quot;260&quot; data-origin-height=&quot;263&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lUbSm/btrrn3pOTKW/z92oGEbzwaeFqJLM6Hvk4K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lUbSm/btrrn3pOTKW/z92oGEbzwaeFqJLM6Hvk4K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lUbSm/btrrn3pOTKW/z92oGEbzwaeFqJLM6Hvk4K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlUbSm%2Fbtrrn3pOTKW%2Fz92oGEbzwaeFqJLM6Hvk4K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;260&quot; height=&quot;263&quot; data-filename=&quot;imaging-in-medicine-foreign-bodies-11-5-57-g004 (5).png&quot; data-origin-width=&quot;260&quot; data-origin-height=&quot;263&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 저런 artifacts와 같은 요소들이 없는 폐렴 데이터가 들어오면 곧 바로 틀려버립니다. 즉, &lt;b&gt;supervsied learning&lt;/b&gt; 방식을 사용한 &lt;b&gt;CNN&lt;/b&gt;이 &lt;b&gt;적은 수 의 폐렴 데이터&lt;/b&gt;로 학습하게 되면, &lt;b&gt;폐렴 CXR 이미지를 제대로 representation 해줄 수 없게 된다&lt;/b&gt;고 이야기 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또 다른 예시로는 &lt;b&gt;한 병원의 의료 이미지 데이터만 사용하여 학습&lt;/b&gt;시킨 &lt;b&gt;딥러닝 모델&lt;/b&gt;을 다른 병원의 의료 이미지 데이터로 테스트 하려고 하면 잘 안되는 경우가 있습니다. 여기서 생각해 볼 수 있는 부분은 해당 딥러닝이 &lt;b&gt;특정 병원&lt;/b&gt;에서 생산되는 기기의 &lt;b&gt;noise&lt;/b&gt;를 &lt;b&gt;학습&lt;/b&gt;해버렸다거나, 기기의&amp;nbsp;&lt;b&gt;contrast&lt;/b&gt;를 고유하게 &lt;b&gt;학습&lt;/b&gt;했을 경우 &lt;b&gt;다른 병원의 noise, contrast가 들어오면 기존 예측방식에 큰 혼란&lt;/b&gt;을 줄 수 있겠죠 (물론 data augmentation을 이용하여 이러한 문제를 조금 해결할 수 있겠으나, 우리가 모르는 해당 병원의 고유의 변수들을 모두 커버할 순 없겠죠).&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결과적으로, 적은 수의 데이터를 이용하여 supervised learning을 적용하면 &lt;b&gt;unseen data (=training data에 포함되지 않은 데이터) 를 잘 예측하지 못하게 됩니다&lt;/b&gt;. 이를 보통 &lt;b&gt;sensitive 하게 반응&lt;/b&gt;한다고 하죠. 그렇다면, 우리는 &lt;b&gt;unseen data에도 robust하게 작동&lt;/b&gt;할 수 있는 &lt;b&gt;딥러닝 모델&lt;/b&gt;을 &lt;b&gt;만들어야&lt;/b&gt; 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;만약 우리가 방대한 unlabeled data를 이용하여 이미지를 잘 representation 해 줄 수 있는 딥러닝 모델을 만든다면, 이러한 딥러닝 모델을 pretrained model로써 활용할 수 있을 겁니다. 즉, 최종 supervised task에 unlabeled data으로 학습시킨 pretrained model을 적용해도 좋은 &lt;b&gt;'일반화 성능'&lt;/b&gt;을 얻을 수 있게 되는 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;Training with supervised learning techniques on the labeled subset often &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;results in severe overﬁtting. Semi-supervised learning oﬀers the chance to resolve &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;this overﬁtting problem by also learning from the unlabeled data. Speciﬁcally, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;we can learn good representations for the unlabeled data, and then use these &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;representations to solve the supervised learning task.&quot;&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또한 우리는 unlabeled data를 이용한&lt;b&gt; unsupervised learning 방식&lt;/b&gt;과 &lt;b&gt;supervised learning 방식&lt;/b&gt;을 &lt;b&gt;섞은 semi-supervised learning 방식&lt;/b&gt;을 이용할 수도 있습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;즉, unsupervised learning 방식의 internal representation와 supervised learning 방식의 internal representation을 공유(share)할 수도 있는 것이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;909&quot; data-origin-height=&quot;301&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bs3Cns/btrriVGunhA/Gueredu3tC0LkYLFj7zzv1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bs3Cns/btrriVGunhA/Gueredu3tC0LkYLFj7zzv1/img.png&quot; data-alt=&quot;출처: UnderstandandLeveragetheInternalRepresentationsof ConvolutionalNeuralNetworks (by Bolei Zhou; MIT)&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bs3Cns/btrriVGunhA/Gueredu3tC0LkYLFj7zzv1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbs3Cns%2FbtrriVGunhA%2FGueredu3tC0LkYLFj7zzv1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;909&quot; height=&quot;301&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;909&quot; data-origin-height=&quot;301&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;출처: UnderstandandLeveragetheInternalRepresentationsof ConvolutionalNeuralNetworks (by Bolei Zhou; MIT)&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;internal representation 에 대한 개념 이해 시 참고한 사이트 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1642748405149&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Aman's AI Journal &amp;bull; CS231n &amp;bull; Visualizing and Understanding&quot; data-og-description=&quot;Review: Computer Vision Tasks We&amp;rsquo;ve talked about architectural design in the context of convolutional neural networks. We&amp;rsquo;ve primarily studied this within the purview of image classification. In the last couple of topics, we&amp;rsquo;ve been looking at the ot&quot; data-og-host=&quot;aman.ai&quot; data-og-source-url=&quot;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&quot; data-og-url=&quot;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://aman.ai/cs231n/visualization/#visualizing-internal-representationsactivations&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Aman's AI Journal &amp;bull; CS231n &amp;bull; Visualizing and Understanding&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Review: Computer Vision Tasks We&amp;rsquo;ve talked about architectural design in the context of convolutional neural networks. We&amp;rsquo;ve primarily studied this within the purview of image classification. In the last couple of topics, we&amp;rsquo;ve been looking at the ot&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;aman.ai&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Representation에 영향을 미치는 세 가지 요소는 아래와 같다고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Input data
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;input data를 적절히 preprocessing 해주는 것에 따라서 적절한 new representation(or hidden feature vector)을 구할 수 있음&lt;/li&gt;
&lt;li&gt;최근 data에 기반한 data-centric research가 많이 주목 받고 있음&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Model
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;보통 딥러닝 연구에서는 모델링 중심 (=model centric) 연구가 주(=main)를 이룸&lt;/li&gt;
&lt;li&gt;Deep Learning
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;DNN, CNN, ViT, etc ...&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Machine Learning
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;K-means, KNN (&lt;span style=&quot;background-color: #ffffff; color: #4d5156;&quot;&gt;K-Nearest Neighbor), etc ...&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Task Type
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Supervised Learning 방식&lt;/li&gt;
&lt;li&gt;Semi-supervised learning 방식&lt;/li&gt;
&lt;li&gt;Unsupervised Learning 방식
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;self-supervised learning
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;contrastive learning&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;GAN Inversion&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1308&quot; data-origin-height=&quot;365&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dbhONH/btrraBuPegB/NpV2QFyNz4HL8LYsskrNH0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dbhONH/btrraBuPegB/NpV2QFyNz4HL8LYsskrNH0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dbhONH/btrraBuPegB/NpV2QFyNz4HL8LYsskrNH0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdbhONH%2FbtrraBuPegB%2FNpV2QFyNz4HL8LYsskrNH0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1308&quot; height=&quot;365&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1308&quot; data-origin-height=&quot;365&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 &quot;representation learning&quot;에 대한 개념을 살펴보았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;다음글에서는 unsupervised learning 방식으로 학습시킨 representation model (=pretrained model) 을 어떻게 supervised task에 적용해서 사용했는지 그 사례에 대해서 살펴보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;</description>
      <category>Representation Learning</category>
      <category>feature learning</category>
      <category>feature representation learning</category>
      <category>internal representation</category>
      <category>linear classifier</category>
      <category>representation</category>
      <category>representation learning</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/339</guid>
      <comments>https://89douner.tistory.com/339#entry339comment</comments>
      <pubDate>Fri, 21 Jan 2022 17:17:52 +0900</pubDate>
    </item>
    <item>
      <title>1. MLOps 란? (Feat Auto ML)</title>
      <link>https://89douner.tistory.com/337</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는 &lt;b&gt;Machine Learning Operations&lt;/b&gt; 의 약자인 &lt;b&gt;MLOps&lt;/b&gt;라는 개념에 대해 다루어보려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;글의 전개 순서는 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;MLOps란?&amp;nbsp;&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;b&gt;Design&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Model development&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Operations&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;머신러닝 or 딥러닝 연구자들이 MLOps에 관심 갖어야 하는 이유 (Feat. Model development)&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;AutoML&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;[Note]&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 MLOps라고 표현은 하긴 하지만, 딥러닝 역시 ML에 포함되기 때문에 DLOps라는 개념으로도 이해하셔도 좋을 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. MLOps란?&amp;nbsp;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Machine Learning (or Deep learning)&lt;/b&gt; 기반의 &lt;b&gt;프로젝트&lt;/b&gt;를 시작한다고 하면 &lt;b&gt;크게 세 가지 단계&lt;/b&gt;로 나눌 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;Design&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;산업에서 요구하는 바를 기준으로 문제를 정의합니다.&lt;/li&gt;
&lt;li&gt;예를 들어, 어떠한 데이터가 필요한지, 어떠한 딥러닝 기술이 중요하게 적용되면 좋은지 등을 의논합니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Model Development&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;우리가 design한 실험을 수행하기 위한 개발단계입니다.&lt;/li&gt;
&lt;li&gt;산업에서 사용할 수 있도록 제품의 안정성을 충분히 검증(verifying) 해야 합니다.&lt;/li&gt;
&lt;li&gt;그러기 위해서는 굉장히 다양한 실험을 수행해야 하죠.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;b&gt;Operations&lt;/b&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;이렇게 개발한 모델을 최종 사용자에게 서비스 하기 위한 단계입니다.&lt;/li&gt;
&lt;li&gt;우리가 만든 모델을 어느 곳에 배포할 지 (ex: 웹, 핸드폰, 컴퓨터 등) 에 따라서 배포방식이 달라집니다.&lt;/li&gt;
&lt;li&gt;제공된 제품을 지속적으로 모니터링(관리)해야 합니다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; width=&quot;620&quot; height=&quot;465&quot; data-origin-width=&quot;2732&quot; data-origin-height=&quot;2048&quot; data-filename=&quot;mlops-loop-en.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/buAirg/btrh5YKTcZs/SrvZCJsiFOkj3mlHKSaK91/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/buAirg/btrh5YKTcZs/SrvZCJsiFOkj3mlHKSaK91/img.jpg&quot; data-alt=&quot;&amp;amp;amp;lt;출처: https://ml-ops.org/content/mlops-principles&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/buAirg/btrh5YKTcZs/SrvZCJsiFOkj3mlHKSaK91/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbuAirg%2Fbtrh5YKTcZs%2FSrvZCJsiFOkj3mlHKSaK91%2Fimg.jpg&quot; width=&quot;620&quot; height=&quot;465&quot; data-origin-width=&quot;2732&quot; data-origin-height=&quot;2048&quot; data-filename=&quot;mlops-loop-en.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;출처: https://ml-ops.org/content/mlops-principles&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 &lt;b&gt;business&lt;/b&gt;라는 &lt;b&gt;영역&lt;/b&gt;에 제한하여 &lt;b&gt;MLOps&lt;/b&gt;라는 개념을 사용하고는 있지만, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;제 개인적인 생각&lt;/b&gt;&lt;/span&gt;으로는 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;대학원 연구자&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;들&amp;nbsp;또한&lt;/span&gt;&lt;b&gt; MLOps에 주목&lt;/b&gt;&lt;/span&gt;해야 한다고 생각합니다. 지금부터 그 이유에 대해 간단히 설명해보겠습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;1-1. Design (Feat. 계획)&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;병원&lt;/b&gt;에서 &lt;b&gt;의료인공지능&lt;/b&gt;을 하시는 분들의 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;가장 큰 장점은 design 단계가 훌륭&lt;/b&gt;&lt;/span&gt;하다는 점입니다.&amp;nbsp; 아무리 IT 업계에서 의료인공지능을 한다고 해도, 의사분들이 &quot;그건 의학적으로 의미가 없습니다&quot;라고 말한다면 모든 프로젝트가 물거품이 될 가능성이 높지요. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Model development, Operations 단계에서 실패하는 것 보다 Design 단계에서 실패할 때 더 많은 시간과 비용을 손해보게 되기 때문에, Design 설계는 ML 프로젝트에서 매우 중요한 일입니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;병원&lt;/b&gt;에서 &lt;b&gt;딥러닝&lt;/b&gt;을 &lt;b&gt;연구&lt;/b&gt;하시는 분들이&lt;b&gt; 높은 IF&lt;/b&gt;를 갖는 저널에 논문을 내는 이유는 그만큼&lt;b&gt; design이 훌륭하기 때문&lt;/b&gt;입니다. 개인적으로 &quot;딥러닝이 사용되면 의사들의 역할이 없어 질 것이다&quot;라는 말에 동의하지 못하는 이유가&amp;nbsp; 여기에 있기도 합니다. 오히려, design이 잘 되지 않는 딥러닝 연구가 많아 질 수 록 딥러닝 or 기계학습 버블이 생겨날 것인데, 이러한 버블을 걷어내는 역할을 의사분들이 하실 수 도 있겠죠.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1600&quot; data-origin-height=&quot;1106&quot; data-filename=&quot;depositphotos_142819793-stock-photo-medical-doctors-at-the-conference.jpg&quot; width=&quot;581&quot; height=&quot;401&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Nujpy/btrh4oW2Iux/9JWXrcxIRFGnZEFym28dd1/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Nujpy/btrh4oW2Iux/9JWXrcxIRFGnZEFym28dd1/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Nujpy/btrh4oW2Iux/9JWXrcxIRFGnZEFym28dd1/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FNujpy%2Fbtrh4oW2Iux%2F9JWXrcxIRFGnZEFym28dd1%2Fimg.jpg&quot; data-origin-width=&quot;1600&quot; data-origin-height=&quot;1106&quot; data-filename=&quot;depositphotos_142819793-stock-photo-medical-doctors-at-the-conference.jpg&quot; width=&quot;581&quot; height=&quot;401&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;사실 이렇게 design을 하는 영역은 학계의 역할이라고 할 수 있습니다. &lt;b&gt;보통 학계는 오랫동안 domain knowledge를 쌓았기 때문에 어떠한 연구가 유의미한 것인지 판단할 수 있죠.&lt;/b&gt; 그리고, 매일 현재 기술들이 갖는 문제를 제기하고 이를 해결하기 위해 연구하기 때문에 최신 기법 솔루션들은 대부분 학계로부터 나온다고 볼 수 있습니다. 또한, 어떤 데이터를 이용하면 좋은지 알고 있으며, 특히 병원 같은 곳에서는 자체 의료 데이터를 갖고 있기 때문에 design 측면에서 제약사항이 상대적으로 작다고 할 수 있죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;구글, 페이스북, 애플, 마이크로소프트, 엔비디아&lt;/b&gt; 등 같은 회사들이 딥러닝에서 좋은 결과를 도출할 수 있었던 것도 모두 &lt;b&gt;학계로부터 도움을 받았&lt;/b&gt;기 때문입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imagegridblock&quot;&gt;
  &lt;div class=&quot;image-container&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bjnUYh/btrhUi5un9p/KbB5bRSdFMcvlOlFBVWt01/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bjnUYh/btrhUi5un9p/KbB5bRSdFMcvlOlFBVWt01/img.png&quot; data-origin-width=&quot;532&quot; data-origin-height=&quot;514&quot; data-filename=&quot;그림1.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; width=&quot;372&quot; height=&quot;359&quot; style=&quot;width: 50.5674%; margin-right: 10px;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bjnUYh/btrhUi5un9p/KbB5bRSdFMcvlOlFBVWt01/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbjnUYh%2FbtrhUi5un9p%2FKbB5bRSdFMcvlOlFBVWt01%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;532&quot; height=&quot;514&quot;/&gt;&lt;/span&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d1eJl3/btrhWaZSSyy/enO7w2AH7VMbMBEfR1ONE1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d1eJl3/btrhWaZSSyy/enO7w2AH7VMbMBEfR1ONE1/img.png&quot; data-origin-width=&quot;576&quot; data-origin-height=&quot;583&quot; data-filename=&quot;그림2.png&quot; width=&quot;319&quot; height=&quot;323&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; style=&quot;width: 48.2698%;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d1eJl3/btrhWaZSSyy/enO7w2AH7VMbMBEfR1ONE1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd1eJl3%2FbtrhWaZSSyy%2FenO7w2AH7VMbMBEfR1ONE1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;576&quot; height=&quot;583&quot;/&gt;&lt;/span&gt;&lt;/div&gt;
&lt;/figure&gt;
&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;576&quot; data-filename=&quot;그림4.png&quot; width=&quot;403&quot; height=&quot;388&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/de9GOX/btrhX1bmgAo/mhRxFxKOPBjk7WY0AMzJu1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/de9GOX/btrhX1bmgAo/mhRxFxKOPBjk7WY0AMzJu1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/de9GOX/btrhX1bmgAo/mhRxFxKOPBjk7WY0AMzJu1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fde9GOX%2FbtrhX1bmgAo%2FmhRxFxKOPBjk7WY0AMzJu1%2Fimg.png&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;576&quot; data-filename=&quot;그림4.png&quot; width=&quot;403&quot; height=&quot;388&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;1-2. Model development&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 실험을 &lt;b&gt;design&lt;/b&gt; 했으면 이를 실제로 &lt;b&gt;실현(실험)&lt;/b&gt;해야 하는&lt;b&gt; 단계&lt;/b&gt;를 거쳐야 합니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;우리가 &lt;b&gt;흔히&lt;/b&gt; 머리속에 그리는 &lt;b&gt;실험&lt;/b&gt;은 &lt;b&gt;관찰&lt;/b&gt;하는 방법입니다. &lt;b&gt;실험군 대조군&lt;/b&gt;을 설정하고 실험장비를 이용하여 &lt;b&gt;지속관찰&lt;/b&gt;하면서 &lt;b&gt;상태&lt;/b&gt;를 &lt;b&gt;기록&lt;/b&gt;하죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, &lt;b&gt;machine learning (or deep learning)&lt;/b&gt;은 대부분 &lt;b&gt;컴퓨터(Turing machine)에 기반하여 연구&lt;/b&gt;를 진행합니다. 그래서, 실험을 design 하기 위해 필요한 이론적 지식 외에도 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;컴퓨터를 잘 다룰 수 있는 능력이 필요&lt;/b&gt;&lt;/span&gt;하죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어보겠습니다. Design 한대로 실험을 하기 위해서는 우선&lt;b&gt; 데이터부터 모아&lt;/b&gt;야 할 것입니다. 딥러닝에서는 레이블이 되어있는 데이터를 사용해야하는 경우가 많기 때문에&lt;b&gt; labelling 프로그램 tool&lt;/b&gt;을 잘 사용할 줄 아는 것이 중요합니다.&amp;nbsp; 만약 &lt;b&gt;데이터가 방대하면 분산처리 시스템 (ex: Hadoop, Spark 등)&lt;/b&gt; 등을 지원해주는 tool도 잘 활용할 줄 알아야 합니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;하지만, 보통 model development 단계에서는 Data를 수집하고 관리하는 단계보다는 아래 그림처럼 model dvelopment, training, devaluation을 하기 위한 별도의 tool들을 사용합니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1302&quot; data-origin-height=&quot;752&quot; data-filename=&quot;그림6.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bXZXFm/btrh7wgvS27/utHv5IHBCGkxKGXGZy9ZdK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bXZXFm/btrh7wgvS27/utHv5IHBCGkxKGXGZy9ZdK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bXZXFm/btrh7wgvS27/utHv5IHBCGkxKGXGZy9ZdK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbXZXFm%2Fbtrh7wgvS27%2FutHv5IHBCGkxKGXGZy9ZdK%2Fimg.png&quot; data-origin-width=&quot;1302&quot; data-origin-height=&quot;752&quot; data-filename=&quot;그림6.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Model development 단계에서 가장 볼 수 있는 것은 &lt;b&gt;frameworks&lt;/b&gt; 입니다.&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;딥러닝 모델을 개발하기 위해서는 &lt;b&gt;딥러닝 라이브러리를 지원해주는 framework&lt;/b&gt;를 선택해야 합니다. &lt;b&gt;예전&lt;/b&gt;에는 &lt;b&gt;caffe, theano, keras&lt;/b&gt; 등이 사용되었지만, &lt;b&gt;현재&lt;/b&gt;는 &lt;b&gt;pytorch, tensorflow&lt;/b&gt;로 통합되고 있습니다. (물론 최근에 keras가 tensorflow로부터 독립하긴했지만....)&amp;nbsp; &lt;b&gt;최근&lt;/b&gt;에는 &lt;b&gt;Fast.ai, pytorch lighting&lt;/b&gt; 등 좀 더 간편하게 사용하기 위해 제공되는 framework들이 각광을 받고 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또한, &lt;b&gt;딥러닝&lt;/b&gt;은 혼자서 개발하는 것보다 &lt;b&gt;협업&lt;/b&gt;하여 &lt;b&gt;개발&lt;/b&gt;하는 것이 &lt;b&gt;훨씬 효율적&lt;/b&gt;입니다. 그렇기 때문에 최대한 이러한 &lt;b&gt;협업&lt;/b&gt;을 &lt;b&gt;지원&lt;/b&gt;해주는 &lt;b&gt;software&lt;/b&gt;를 잘 사용할 줄 아는 것이 중요합니다. 예를 들어, &lt;b&gt;Github&lt;/b&gt;과 같은 software는 여러 사람이 같이 협업할 때 필요한 기능들(ex: 버전관리 기능)을 다수 제공하고 있습니다. 또한, &lt;b&gt;VScode&lt;/b&gt; 같이 좋은 기능이 많이 담긴 IDE를 사용하게 되면 github 연동, interactive mode 기능 등을 잘 사용할 수 있기 때문에 개발이 빨라질 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;딥러닝 연구&lt;/b&gt;에서 구글, 페이스북, 엔비디아 같은 회사의 리서쳐들이 &lt;b&gt;좋은 성과&lt;/b&gt;를 낼 수 있는 이유는 &lt;b&gt;자원&lt;/b&gt;을 &lt;b&gt;최대한 잘 활용&lt;/b&gt;할 수 있기 때문입니다. &lt;b&gt;Docker, Kubeflow&lt;/b&gt; 같은 &lt;b&gt;resource management tool&lt;/b&gt;을 이용하면서 자원 관리를 하고, horovod 분산처리 시스템 또는 Mixed precision과 같은 기술 등을 이용하여 자원을 최대한 활용할 수 있도록 하기 때문에 실험을 빠르고 효율적으로 진행할 수 있었죠.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또한 딥러닝 &lt;b&gt;실험&lt;/b&gt;들을 &lt;b&gt;관리&lt;/b&gt;해주는 &lt;b&gt;weight&amp;amp;biases&lt;/b&gt; 와 같은 tool을 이용할 수 있으면 다양한 &lt;b&gt;hyper-parameter&lt;/b&gt;들을 &lt;b&gt;automatic&lt;/b&gt;하게 &lt;b&gt;search&lt;/b&gt; 해줄 수 있으며, &lt;b&gt;실험 결과&lt;/b&gt;들을 매우 &lt;b&gt;용이&lt;/b&gt;하게 &lt;b&gt;비교 분석&lt;/b&gt;할 수 있게 되죠 &lt;b&gt;(Experiment Management)&lt;/b&gt;.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;이론적인 design을 하는 것도 매우 중요하지만 이를 구현하고 실험할 수 있는 능력은 또 다른 문제일 수 있습니다. 머리속으로 상상하는 것과 그것을 실현시키는 것이 다른 문제인것 처럼요.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;1-3. Operations&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;MLOps&lt;/b&gt;의 &lt;b&gt;마지막 단계&lt;/b&gt;라고 할 수 있는 것은 &lt;b&gt;해당 모델&lt;/b&gt;을&lt;b&gt; 배포&lt;/b&gt;하고 &lt;b&gt;운영&lt;/b&gt;하는 단계입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;자신의 딥러닝 또는 머신러닝 모델을 &lt;b&gt;웹에 배포할지&lt;/b&gt;, &lt;b&gt;핸드폰&lt;/b&gt; 같은 곳&lt;b&gt;에 배포할 지&lt;/b&gt;에 따라서&amp;nbsp; &lt;b&gt;배포 방식&lt;/b&gt;도 &lt;b&gt;다양&lt;/b&gt;합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;보통 배포 및 운영과 관련된 이슈들은 딥러닝 학계에서 주로 신경쓰고 있는 분야는 아닙니다&lt;/span&gt;. 주로 &lt;b&gt;computer engineering 분야&lt;/b&gt;에서 다루어지는 문제들이죠.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 최근에는 &lt;b&gt;학계&lt;/b&gt;에서도 조차 자신들이 만든 &lt;b&gt;모델&lt;/b&gt;을 &lt;b&gt;상용화&lt;/b&gt;시키기 위한 &lt;b&gt;노력&lt;/b&gt;들을 하고 있습니다. 즉, 딥러닝이 단순히 학문적인 연구에 머무르는 것이 아니라, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;자신들이 연구하고 개발한 딥러닝 모델이 어떻게 실세계에 영향을 미치는지 보고 싶은 것&lt;/b&gt;&lt;/span&gt;이죠. 결국 이를 위해서는 &lt;b&gt;deployment&lt;/b&gt;와 &lt;b&gt;관련&lt;/b&gt;된 &lt;b&gt;지식&lt;/b&gt;들도 습득할 필요가 있습니다. 예를 들어, &lt;b&gt;microsoft&lt;/b&gt;의 &lt;b&gt;onnx&lt;/b&gt;를 다룰 줄 안다면 배포를 좀 더 쉬운 방법으로 할 수 있겠죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;665&quot; data-origin-height=&quot;320&quot; data-filename=&quot;그림7.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/rhMsf/btrh4NXEgHt/BX1eMPWAM5exlp1XyCkkX1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/rhMsf/btrh4NXEgHt/BX1eMPWAM5exlp1XyCkkX1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/rhMsf/btrh4NXEgHt/BX1eMPWAM5exlp1XyCkkX1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrhMsf%2Fbtrh4NXEgHt%2FBX1eMPWAM5exlp1XyCkkX1%2Fimg.png&quot; data-origin-width=&quot;665&quot; data-origin-height=&quot;320&quot; data-filename=&quot;그림7.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;본래 &lt;b&gt;MLOps&lt;/b&gt;라는 개념은 &lt;b&gt;아래 논문&lt;/b&gt;에서 &lt;b&gt;처음 출현&lt;/b&gt;했다고 합니다. 딥러닝을 연구하다보면 딥러닝 학계의 연구들이 굉장히 많은 기여를 하고 있다고 생각합니다. 물론, 딥러닝 기술을 리딩하는 건 학계에서 출발하는게 대부분이죠. 하지만, ML(or DL) system 관점에서 봤을 때, 딥러닝 모델을 연구를 하고 coding을 하여 구현하는 행위가 기여하는 바가 실세계에서 얼마나 많은 부분을 차지하고 있는지 모를 가능성이 큽니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1074&quot; data-origin-height=&quot;460&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;694&quot; height=&quot;297&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/mbVUB/btrh9RSd2ZD/pacdfYD38s46ndvcRHBZo0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/mbVUB/btrh9RSd2ZD/pacdfYD38s46ndvcRHBZo0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/mbVUB/btrh9RSd2ZD/pacdfYD38s46ndvcRHBZo0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FmbVUB%2Fbtrh9RSd2ZD%2FpacdfYD38s46ndvcRHBZo0%2Fimg.png&quot; data-origin-width=&quot;1074&quot; data-origin-height=&quot;460&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;694&quot; height=&quot;297&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1054&quot; data-origin-height=&quot;370&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nodgS/btrh7wONVM0/rGmKvktXUVkGWJJYFJh7d1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nodgS/btrh7wONVM0/rGmKvktXUVkGWJJYFJh7d1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nodgS/btrh7wONVM0/rGmKvktXUVkGWJJYFJh7d1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnodgS%2Fbtrh7wONVM0%2FrGmKvktXUVkGWJJYFJh7d1%2Fimg.png&quot; data-origin-width=&quot;1054&quot; data-origin-height=&quot;370&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;사실 최근에는 연구분야 조차 MLOps를 이용하지 않으면 연구성과를 빠르게 낼 수 없다는 이야기를 합니다. 그럼 지금부터 연구자들이 MLOps에 관심을 갖어야할 이유에 대해서 살펴보도록 하겠습니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;679&quot; data-filename=&quot;1-MLOps-NVIDIA-invert-final.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vAu3s/btrh9d9VEXG/xri2ezoLe8uAiZfemBjy50/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vAu3s/btrh9d9VEXG/xri2ezoLe8uAiZfemBjy50/img.jpg&quot; data-alt=&quot;그림 출처:&amp;amp;amp;nbsp;https://blogs.nvidia.co.kr/2020/09/11/what-is-mlops/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vAu3s/btrh9d9VEXG/xri2ezoLe8uAiZfemBjy50/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvAu3s%2Fbtrh9d9VEXG%2Fxri2ezoLe8uAiZfemBjy50%2Fimg.jpg&quot; data-origin-width=&quot;1280&quot; data-origin-height=&quot;679&quot; data-filename=&quot;1-MLOps-NVIDIA-invert-final.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처:&amp;nbsp;https://blogs.nvidia.co.kr/2020/09/11/what-is-mlops/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. 머신러닝 or 딥러닝 연구자들이 MLOps에 관심 갖어야 하는 이유&amp;nbsp;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 machine learning (or deep learning) 제품을 만든다는 것은 위에서 언급한 3가지 단계 과정을 모두 포함합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;학계&lt;/b&gt;에서는 design, model development, deployment와 같은 단계에 대해 크게 신경쓰진 않았습니다. 보통은 수학, 물리학, 생물학 등의 개념들을 잘 이용하여 &lt;b&gt;새로운 딥러닝 이론&lt;/b&gt;을 &lt;b&gt;제시&lt;/b&gt;하여 &lt;b&gt;좋은 논문&lt;/b&gt;을 내는 것이&lt;b&gt; 목적&lt;/b&gt;이었죠. 이러한 연구들이 &lt;b&gt;local 환경에서도 잘 수행되면&lt;/b&gt; 복잡한 tool들이 필요가 없습니다. 그냥 데이터를 다운 받고, 딥러닝 모델을 구현하고 학습하고 evaluation하면 그만이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 산업계에서는 현실적용 가능한 연구를 요구합니다. 그래서, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;좋은 논문을 냈더라도 그것이 현실에 잘 적용이 되지 않는다면, 딥러닝 기술들이 거품이라는 이야기를 듣게되죠&lt;/b&gt;&lt;/span&gt;. 결국, 딥러닝 연구 역시 상용화를 목적에 두어야 더 환영받는 연구분야가 될 것 입니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;대학원생 역시 졸업을 하고 산업계로 뛰어들 때, 이론만 아는 바보박사 또는 바보석사가 되지 않기 위해 딥러닝과 관련한 다양한 tool에 대해서 알고 있어야 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span&gt;2-1. 석사시절 경험 (2017~2018)&lt;/span&gt;&lt;/b&gt;&lt;span&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2017년 &lt;b&gt;대학원 석사 초기(1학기)&lt;/b&gt;에 연구를 하면서 아래와 같은 생각을 했던 적이 있었습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;딥러닝 이론을 공부하는데도 시간 없는데 MLOps 같은 것 까지 신경쓸 순 없다.&lt;/li&gt;
&lt;li&gt;학계는 학계가 할 일이 따로 있다. 예를 들어, &lt;b&gt;학계는 창조적인 이론을 만들고 기존 이론들을 잘 정립&lt;/b&gt;하는게 주 된 역할이다.&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;위와 같은 생각은 석사를 졸업하면서 아래와 같이 바뀌게 됐습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;구글, 페이스북, 엔비디아, 마이크로 소프트에서 제출한 논문 결과들이 과연 한 번만 테스트 해본 것일까?&amp;nbsp;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;좋은 결과를 얻기 위해서는 수 많은 실험을 실행할 수 있는 능력을 키워야 하는 것이 아닌가?&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;내가 자원을 더 잘 활용할 수 있었더라면 더 많은 실험을 할 수 있지 않았을까?&lt;/li&gt;
&lt;li&gt;내가 공부한 내용을 산업계가 관심있어할까?&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 이러한 질문을 통해서 내린 결론을 다음과 같았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;실험을 설계(design)할 이론 공부와 실험을 빠르게 실행할 개발(development) 공부를 병행해야겠다&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;&lt;b&gt;2-2. 외부연구원&amp;nbsp; 경험 (2020)&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2020년 석사시절 지도해주신 분께서 한국교통대 교수님으로 임용되셔서, 한국교통대 외부연구원으로 지내게 되었습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;당시 의료인공지능 관련 연구를 하면서 들었던 생각은 기존의 딥러닝 모델을 실험하고 연구하는 방식에 모순이 많이 있다는 점이었습니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;특히, &lt;b&gt;충분한 ablation study를 하지 않고도 논문이 accept&lt;/b&gt;이 되는 등 &lt;b&gt;개인적으로 이해가 안가&lt;/b&gt;는 부분들이 많이 있었죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;learning rate, batch size, random seed 등의 hyper-parameter 값에 따라서 실험 결과가 바뀌기도 하는데, 논문에서 단 한번의 실험으로 1%의 성능이 향상되었다, 2%의 성능이 향상되었다는 결론이 받아드려진다는게 이해가되질 않았습니다.&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서, 종종 &lt;b&gt;&quot;딥러닝 실험을 할 때 필수적으로 해야할 ablation study을 왜 하지 않는것이냐?&quot;&lt;/b&gt;라는 질문을 던졌을 때 돌아온 답은 아래와 같았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;실험할 자원도 없고, 시간도 없으니까요&quot;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;이때부터 딥러닝 연구를 위해서 MLOps 지식들이 필요할 수 있겠다라는 생각&lt;/b&gt;을 하게 됐습니다. 특히, M&lt;b&gt;odel development와 관련된 다양한 toolkit들에 관심&lt;/b&gt;을 갖게 되었고, 동시에 아래와 같은 기대를 하게 되었습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt; &quot;이러한 toolkit들을 잘 이용한다면 탄탄한 실험을 빠른 시간내에 할 수 있지 않을까?&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서, MLOps에 기반이 되는 GPU 공부를 시작으로 다양한 toolkit (ex: Github, Mixed precision, Horovod, Weight&amp;amp;Biases, Docker 등) 을 알게 됐습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;2-3. 현재 연구실 (2021)&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;현재는 2020년에 배웠던 몇 가지 MLOps 기법들을 적용해 보면서 연구에 많은 도움을 받고 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Docker: 연구실 내 고사양 GPU 서버를 효율적으로 사용함&lt;/li&gt;
&lt;li&gt;Github: 개별적으로 개발한 후 통합(integration)하여 버전 관리를 진행&lt;/li&gt;
&lt;li&gt;Weigth&amp;amp;Biases: Hyper-parameter tuning, Experiment management를 자동화 하여 실험 및 결과를 빠르고 탄탄하게 분석 (Efficient ablation study)&lt;/li&gt;
&lt;li&gt;Horovod: 다양한 GPU를 이용하여 효율적인 분산처리 시스템을 구축한 후, 학습 속도를 증가시킴&lt;/li&gt;
&lt;li&gt;Mixed precision: 배웠던 GPU 지식을 활용해 문제없이 mixed precision을 적용하여 inference 속도를 증가시킴&amp;nbsp;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(시간이 되는대로 배운 개념들을 잘 글로 정리하고, 비교한 실험 결과들을 설명할 예정입니다.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3. 내가 생각하는 (연구자들이 특히 관심갖어야 할) MLOps&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;3-1. GPU 성능의 발전&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;Samsung, TSMC&lt;/b&gt; 의 &lt;b&gt;나노&lt;/b&gt;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;공정&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;경쟁&lt;/b&gt;이 지속되면서&lt;b&gt; GPU&lt;/b&gt;의 &lt;b&gt;성능&lt;/b&gt;은 계속해서 &lt;b&gt;증가&lt;/b&gt;할 것입니다. 현재&lt;b&gt;(2021.10월 기준)&lt;/b&gt; &lt;b&gt;최신 GPU RTX 30 series&lt;/b&gt;의 나노공정 기술이 &lt;b&gt;8nm&lt;/b&gt;인데, &lt;b&gt;최근(2021.10월 기준)&lt;/b&gt;에는 &lt;b&gt;NVIDIA가 TSMC의 5nm 공정을 이용&lt;/b&gt;해 새로운&lt;b&gt; GPU 모델&lt;/b&gt;을 &lt;b&gt;생산&lt;/b&gt;할 것이다라는 이야기가 돌고 있죠.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;만약, 개인 연구자들이 몇 개의 GPU를 살 수 있게 가격이 조정되고, 더 좋은 GPU가 나온다면&amp;nbsp; GPU, TPU를 극대화 하는 기술들(ex: 분산처리 시스템, Tensor core 이용 기술)을 잘만 이용하면 누구나 좋은 실험을 할 수 있을 것입니다. 예&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 들어, 앞으로 &lt;b&gt;horovod와 같이 좀 더 발전된 분산시스템&lt;/b&gt;을 잘 이용할 수 있다면&lt;b&gt; 발전된 GPU 자원을 극대화&lt;/b&gt; 하여 연구성과를 내는데 큰 일조를 할 수 있을 것입니다.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;즉, &lt;span style=&quot;color: #000000;&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;우리가 감히 상상도 해보지 못했던 실험들을 할 수 있을 것이고, 방대한 양의 실험들을 통해 좀 더 근거있는 연구 결과를 선보일 수 있을 것입니다.&lt;/b&gt;&lt;/span&gt;&lt;span&gt; 그런데 지금부터 준비하지 않는다면 결국 나중에 &lt;/span&gt;&lt;/span&gt;다가오는 기회를 놓칠 가능성이 많아지겠죠.&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;3-2. Auto ML (Feat. Feature Engineering, HPO, NAS)&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;[3-2-1. NAS (Neural Architecture Search)]&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;&lt;b&gt;보통 ML (or DL) 연구자&lt;/b&gt;들이 하는 많은 일들 중 하나는 &lt;b&gt;새로운 ML(or DL) 모델&lt;/b&gt;을 &lt;b&gt;고안&lt;/b&gt;하고 구현하는 것입니다. 하지만, 모델을 고안하고 연구하는 것은 쉬운일이 아닙니다. 왜냐하면 특정 모델을 구현하는데 &lt;b&gt;고려해야할 요소&lt;/b&gt;도 굉장히 &lt;b&gt;많으&lt;/b&gt;며, 해당 &lt;b&gt;모델이 커질 수록&lt;/b&gt; 사용해야할 &lt;b&gt;자원&lt;/b&gt;도 &lt;b&gt;많아&lt;/b&gt;지고 &lt;b&gt;시간&lt;/b&gt;도 &lt;b&gt;오래&lt;/b&gt; 걸리기 때문입니다. 그래서, 딥러닝이 풀어야할 문제에 &lt;b&gt;최적화 된 모델&lt;/b&gt;을 &lt;b&gt;자동&lt;/b&gt;으로 &lt;b&gt;만들&lt;/b&gt;어주는 &lt;b&gt;NAS&lt;/b&gt;라는 &lt;b&gt;기술&lt;/b&gt;이 계속 &lt;b&gt;연구&lt;/b&gt;되고 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;[3-2-2. Feature Engineering]&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;보통 산업계에서는 새로운 ML(or DL) 모델을 고안하지 않는 것 보다, 기존 모델의 성능을 유지보수 하고 빠른 시간내에 향상시키는 방법을 선호합니다. &lt;b&gt;최근&lt;/b&gt;에는 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;모델링을 중점적으로 하는 model-centric view 연구보다 데이터의 퀄리티를 향상 시키는 data-centric view 연구가 더 많은 관심&lt;/b&gt;&lt;/span&gt;을 받고 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;(&amp;darr;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr; Data Centric AI를 강조하는 Andrew ng&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=06-AZXmwHjo&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/cNTTQr/hyL0Zf3KXn/wSXT7aKYAOTCNFjIHOpuAk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=1062_138_1232_324&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/06-AZXmwHjo&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;해당내용을 요약하면 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Model-centric view
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;모델이 좋으면 조금 품질이 떨어지는 데이터로 학습해도 잘 동작한다는 관점&lt;/li&gt;
&lt;li&gt;Good model &amp;rarr; Robust for various dataset&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Data-centric view
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;모델링을 하더라도 데이터를 기반으로 만들어야 하는 것이 맞는 방향&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;좋은 모델을 만드는 것보다 좋은 데이터를 만드는 것이 딥러닝 모델 성능에 더 크게 기여&lt;/b&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;대부분의 작업은 데이터를 수집하고 가공하는 일&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;생산된 철 제품 중 불량을 검출하는 프로젝트&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;]&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;첫 시도(=baseline)에서는 76.2% 검출율을 보임&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;90%&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;accuracy&lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;까지 올리는 것이 목표&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;두 그룹으로 나눠서 프로젝트를 진행함&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Model centric &lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;그룹&lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;: &lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;딥러닝 모델 연구&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Data-centric &lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;그룹&lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;: data&lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;전처리&lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;(cleansing), &lt;/span&gt;&lt;span style=&quot;font-size: 1.12em; letter-spacing: 0px; color: #000000;&quot;&gt;질 좋은 데이터 선별&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Data-centric &lt;span style=&quot;letter-spacing: 0px; color: #000000;&quot;&gt;그룹에서 성능향상이 뚜렷하게 나타남&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1184&quot; data-origin-height=&quot;455&quot; data-filename=&quot;그림9.png&quot; width=&quot;573&quot; height=&quot;220&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dcCjyQ/btrh95jdgeJ/VtGzIE9s34jbAgprA6jho0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dcCjyQ/btrh95jdgeJ/VtGzIE9s34jbAgprA6jho0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dcCjyQ/btrh95jdgeJ/VtGzIE9s34jbAgprA6jho0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdcCjyQ%2Fbtrh95jdgeJ%2FVtGzIE9s34jbAgprA6jho0%2Fimg.png&quot; data-origin-width=&quot;1184&quot; data-origin-height=&quot;455&quot; data-filename=&quot;그림9.png&quot; width=&quot;573&quot; height=&quot;220&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앤드류 응은 아래와 같이 딥러닝 연구 트렌드가 변해야 산업의 요구에 빠르게 답할 수 있다고 주장한듯한 느낌을 받았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;686&quot; data-origin-height=&quot;803&quot; data-filename=&quot;그림10.png&quot; width=&quot;441&quot; height=&quot;516&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bALEyY/btribzxCRgG/HfZyYeAHu57O6B62C2ti31/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bALEyY/btribzxCRgG/HfZyYeAHu57O6B62C2ti31/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bALEyY/btribzxCRgG/HfZyYeAHu57O6B62C2ti31/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbALEyY%2FbtribzxCRgG%2FHfZyYeAHu57O6B62C2ti31%2Fimg.png&quot; data-origin-width=&quot;686&quot; data-origin-height=&quot;803&quot; data-filename=&quot;그림10.png&quot; width=&quot;441&quot; height=&quot;516&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #3e3e40;&quot;&gt;&lt;b&gt;ML&lt;/b&gt;에서 &lt;b&gt;data-centric 연구&lt;/b&gt;는 &lt;b&gt;feature engineering&lt;/b&gt; 분야와 많은 관계가 있습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 강의를 보면 raw data에서 어떻게 good data를 선별하는지에 대해 이해하실 수 있으니 참고하시길 바랍니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 아래에서 4강 수업이 feature engineering, 5강이 Art and Science of ML &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634559501548&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Machine Learning with TensorFlow on Google Cloud&quot; data-og-description=&quot;Google 클라우드에서 제공합니다. Learn ML with Google Cloud. Real-world experimentation with end-to-end ML. 무료로 등록하십시오.&quot; data-og-host=&quot;www.coursera.org&quot; data-og-source-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses&quot; data-og-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/gGIc6/hyL03PLcxi/aksA4uzCokV0XDjZIRCMkk/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928,https://scrap.kakaocdn.net/dn/98SjF/hyL0USO2f9/An10sKsFv5kqiDUwIocsdK/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928&quot;&gt;&lt;a href=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/gGIc6/hyL03PLcxi/aksA4uzCokV0XDjZIRCMkk/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928,https://scrap.kakaocdn.net/dn/98SjF/hyL0USO2f9/An10sKsFv5kqiDUwIocsdK/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Machine Learning with TensorFlow on Google Cloud&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Google 클라우드에서 제공합니다. Learn ML with Google Cloud. Real-world experimentation with end-to-end ML. 무료로 등록하십시오.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.coursera.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;Feature Engineering 관련 설명 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634603132136&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[Coursera] 데이터 전처리 : Feature Engineering - Machine Learning with Tensorflow on Google Cloud Platform&quot; data-og-description=&quot;목표&quot; data-og-host=&quot;taeu.github.io&quot; data-og-source-url=&quot;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&quot; data-og-url=&quot;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://taeu.github.io/coursera/deeplearning-coursera-featrue-engineering/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[Coursera] 데이터 전처리 : Feature Engineering - Machine Learning with Tensorflow on Google Cloud Platform&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;목표&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;taeu.github.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;[Note]&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light'; color: #ee2323;&quot;&gt;보통 딥러닝에서는 유의미한 feature를 뽑아주는 feature engineering 역할을 DNN이 수행합니다. 아마 이 부분이 MLOps와 DLOps 를 구분짓게 하는 요소가 될 수 있을 것 같네요.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1021&quot; data-origin-height=&quot;564&quot; data-filename=&quot;f2.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/buEBBT/btrh1Q1XqK5/lilUXswendzDTtTa7T1Qkk/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/buEBBT/btrh1Q1XqK5/lilUXswendzDTtTa7T1Qkk/img.jpg&quot; data-alt=&quot;그림 출처:&amp;amp;amp;nbsp;https://cacm.acm.org/magazines/2020/1/241703-techniques-for-interpretable-machine-learning/fulltext&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/buEBBT/btrh1Q1XqK5/lilUXswendzDTtTa7T1Qkk/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbuEBBT%2Fbtrh1Q1XqK5%2FlilUXswendzDTtTa7T1Qkk%2Fimg.jpg&quot; data-origin-width=&quot;1021&quot; data-origin-height=&quot;564&quot; data-filename=&quot;f2.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처:&amp;nbsp;https://cacm.acm.org/magazines/2020/1/241703-techniques-for-interpretable-machine-learning/fulltext&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;[3-2-3. Hyper-Parameter Optimization (HPO)]&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 언급했듯 &lt;b&gt;딥러닝&lt;/b&gt;에서는 &lt;b&gt;결과&lt;/b&gt;에 &lt;b&gt;영향&lt;/b&gt;을 미치는&lt;b&gt; 다양한 hyperparameter&lt;/b&gt;들 (&lt;b&gt;ex: batch size, learning rate, random seed 등&lt;/b&gt;)이 존재합니다. 그래서 보통 hyper-parameter들에 대한 ablation study가 따로 진행되기도 하죠. 만약에 &lt;b&gt;다양한 hyper-parameter 조합을 학습이 끝날 때 마다&lt;span style=&quot;color: #ee2323;&quot;&gt; manually 설정&lt;/span&gt;해주면 굉장한 시간이 소요&lt;/b&gt;될 것 입니다. 예를 들어, 'batch=16, learning=0.1'이라고 설정한 학습이 끝났으면, 'batch=32, learning=0.01'로 설정하고 다시 학습시켜야 하는데, 언제 학습이 끝났는지도 모르고 (물론 요즘은 알람기능이 있긴 합니다만..), 이걸 하나씩 manually 바꿔주는 것도 여간 귀찮은 일이 아닙니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또한, hyper-parameter search 방식도 random search, grid search, Bayesian optimization, Tree&lt;span style=&quot;color: #4d5156;&quot;&gt;-structured Parzen Estimators algorithm 등 너무나도 많기 때문에 manually 설정하여 학습한다면 많은 시간이 소요되죠. 그래서, &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;보통 weight&amp;amp;biases 를 통해 자동으로 hyper-parameter 값을 setting 해주어 테스트하고 결과를 visualization 해주는 tool을 이용&lt;/b&gt;&lt;/span&gt;하기도 합니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span&gt;지금까지 설명한 내용들을 보통 AutoML이라는 영역안에 포함시키기도 합니다. 즉, AutoML이 잘 작동하면 할 수록 지금처럼 manually 모델링, manually hyper-parameter search 등을 할 필요가 없어지겠죠. 결국, Auto ML에 필요한 software가 MLOps tool로써 나오게 된다면 지금까지 딥러닝을 연구해왔던 방향이 매우 달라질 수 있을 겁니다.&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;864&quot; data-origin-height=&quot;750&quot; data-filename=&quot;그림8.png&quot; width=&quot;600&quot; height=&quot;521&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/m8S0g/btrh8DmKbpk/WObxjBENmFqRZeJlKvCBqK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/m8S0g/btrh8DmKbpk/WObxjBENmFqRZeJlKvCBqK/img.png&quot; data-alt=&quot;그림 출처:&amp;amp;amp;nbsp;https://ettrends.etri.re.kr/ettrends/178/0905178004/&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/m8S0g/btrh8DmKbpk/WObxjBENmFqRZeJlKvCBqK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fm8S0g%2Fbtrh8DmKbpk%2FWObxjBENmFqRZeJlKvCBqK%2Fimg.png&quot; data-origin-width=&quot;864&quot; data-origin-height=&quot;750&quot; data-filename=&quot;그림8.png&quot; width=&quot;600&quot; height=&quot;521&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림 출처:&amp;nbsp;https://ettrends.etri.re.kr/ettrends/178/0905178004/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;4. MLOps 관련 유용한 자료&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;UCBerkely 대학&lt;/b&gt;은 &lt;b&gt;boot campus&lt;/b&gt;를 통해 &lt;span style=&quot;color: #ee2323;&quot;&gt;&lt;b&gt;MLOps를 잘 다루는 것이 ML practioners에게 왜 중요한지 설&lt;/b&gt;&lt;/span&gt;명하고 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://fullstackdeeplearning.com/spring2021/lecture-6/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634543518622&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Full Stack Deep Learning&quot; data-og-description=&quot;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&quot; data-og-host=&quot;fullstackdeeplearning.com&quot; data-og-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; data-og-url=&quot;https://fullstackdeeplearning.com&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bDOt2j/hyL0SNYT9P/nLvaKHNB5aAEvoEvfB90Qk/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/5sdWQ/hyLZQK0pQ8/nceUGrtRVl2kvRo22YFMK0/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276&quot;&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bDOt2j/hyL0SNYT9P/nLvaKHNB5aAEvoEvfB90Qk/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/5sdWQ/hyLZQK0pQ8/nceUGrtRVl2kvRo22YFMK0/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Full Stack Deep Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;fullstackdeeplearning.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 링크는&lt;b&gt; boot campus&lt;/b&gt;에서 다루는 여러 lecture 중 &lt;b&gt;&quot;Lecture 6: MLOps Infrastructure &amp;amp; Tooling&quot; 파트&lt;/b&gt;인데, ML practioners에게 아래와 같은 지식이 중요하다는 언급을 하고 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Software Engineering
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;ANACONDA&lt;/li&gt;
&lt;li&gt;VScode&lt;/li&gt;
&lt;li&gt;CUDA&lt;/li&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Compute Hardware
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;NVIDIA GPU model&lt;/li&gt;
&lt;li&gt;Cloud Options (ex: AWS, GCP, Azure)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Resource Management
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;li&gt;Kubernetes/Kubeflow&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Frameworks
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Pytorch (lighting)&lt;/li&gt;
&lt;li&gt;Tensorflow&lt;/li&gt;
&lt;li&gt;Keras&lt;/li&gt;
&lt;li&gt;fast.ai&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Distributed Training
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Horovod&lt;/li&gt;
&lt;li&gt;Ray&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Experiment Management
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Weight&amp;amp;Biases&lt;/li&gt;
&lt;li&gt;TensorBoard&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Hyperparameter Tuining
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Weight&amp;amp;Biases: Sweep&lt;/li&gt;
&lt;li&gt;Ray Tune&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; UC Berkeley boot campus lecture note &amp;darr;&amp;darr;&amp;darr;)&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot;&gt;https://fullstackdeeplearning.com/spring2021/lecture-6/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634558926221&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Full Stack Deep Learning&quot; data-og-description=&quot;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&quot; data-og-host=&quot;fullstackdeeplearning.com&quot; data-og-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; data-og-url=&quot;https://fullstackdeeplearning.com&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bDOt2j/hyL0SNYT9P/nLvaKHNB5aAEvoEvfB90Qk/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/5sdWQ/hyLZQK0pQ8/nceUGrtRVl2kvRo22YFMK0/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276&quot;&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; data-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bDOt2j/hyL0SNYT9P/nLvaKHNB5aAEvoEvfB90Qk/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/5sdWQ/hyLZQK0pQ8/nceUGrtRVl2kvRo22YFMK0/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Full Stack Deep Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;fullstackdeeplearning.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 MLOps에 관한 개념 및 개인적인 생각에 대한 글을 작성해봤습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;다음 글 부터는 실제로 사용하고 있는 여러 Tool들을 소개해보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[Reference]&lt;/b&gt;&lt;b&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://fullstackdeeplearning.com/spring2021/lecture-6/&lt;/a&gt;&lt;/b&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634604545010&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Full Stack Deep Learning&quot; data-og-description=&quot;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&quot; data-og-host=&quot;fullstackdeeplearning.com&quot; data-og-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; data-og-url=&quot;https://fullstackdeeplearning.com&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/b5Ukl3/hyL0VShGNV/W7U0ciwkrt4OUZI54Y2CJ1/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/2U0IG/hyL02DRhvm/5lNF1zVxPndO1W9Kj2KwqK/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276&quot;&gt;&lt;a href=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://fullstackdeeplearning.com/spring2021/lecture-6/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/b5Ukl3/hyL0VShGNV/W7U0ciwkrt4OUZI54Y2CJ1/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276,https://scrap.kakaocdn.net/dn/2U0IG/hyL02DRhvm/5lNF1zVxPndO1W9Kj2KwqK/img.png?width=860&amp;amp;height=450&amp;amp;face=764_201_832_276');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Full Stack Deep Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Hands-on program for software developers familiar with the basics of deep learning seeking to expand their skills.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;fullstackdeeplearning.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;b&gt;&lt;a href=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses%EF%BB%BF&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses%EF%BB%BF&lt;/a&gt;&lt;/b&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634604434330&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Machine Learning with TensorFlow on Google Cloud&quot; data-og-description=&quot;Google 클라우드에서 제공합니다. Learn ML with Google Cloud. Real-world experimentation with end-to-end ML. 무료로 등록하십시오.&quot; data-og-host=&quot;www.coursera.org&quot; data-og-source-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses%EF%BB%BF&quot; data-og-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/cTPiEn/hyL04ho0gp/QeRfdKGNq9qBRLhgIQqHck/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928,https://scrap.kakaocdn.net/dn/cDiCAo/hyL0VEI8MO/6gyBpISg5h0VpLTAguwY9K/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928&quot;&gt;&lt;a href=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses%EF%BB%BF&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.coursera.org/specializations/machine-learning-tensorflow-gcp#courses%EF%BB%BF&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/cTPiEn/hyL04ho0gp/QeRfdKGNq9qBRLhgIQqHck/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928,https://scrap.kakaocdn.net/dn/cDiCAo/hyL0VEI8MO/6gyBpISg5h0VpLTAguwY9K/img.jpg?width=1772&amp;amp;height=928&amp;amp;face=0_0_1772_928');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Machine Learning with TensorFlow on Google Cloud&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Google 클라우드에서 제공합니다. Learn ML with Google Cloud. Real-world experimentation with end-to-end ML. 무료로 등록하십시오.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.coursera.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://ml-ops.org/content/mlops-principles&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://ml-ops.org/content/mlops-principles&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1634518252621&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;ml-ops.org&quot; data-og-description=&quot;Machine Learning Operations&quot; data-og-host=&quot;ml-ops.org&quot; data-og-source-url=&quot;https://ml-ops.org/content/mlops-principles&quot; data-og-url=&quot;https://ml-ops.org/&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://ml-ops.org/content/mlops-principles&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://ml-ops.org/content/mlops-principles&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;ml-ops.org&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Machine Learning Operations&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;ml-ops.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>MLOPs</category>
      <category>MLOps</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/337</guid>
      <comments>https://89douner.tistory.com/337#entry337comment</comments>
      <pubDate>Mon, 18 Oct 2021 11:03:47 +0900</pubDate>
    </item>
    <item>
      <title>6. AI hub란? (Feat. 디지털 뉴딜 그리고 공공 의료데이터)</title>
      <link>https://89douner.tistory.com/307</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는 &lt;b&gt;디지털 뉴딜&lt;/b&gt; 사업의 일환인 &lt;b&gt;AI hub&lt;/b&gt;를 소개하려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;최근 &lt;b&gt;딥러닝 학습&lt;/b&gt;을 하기 위해 &lt;b&gt;양질의 데이터가&lt;/b&gt; 절실히 필요한 상황입니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;특히, &lt;b&gt;의료 데이터&lt;/b&gt;의 경우에는&lt;b&gt; 우수 인력&lt;/b&gt;들이 동원되어야&lt;b&gt; 학습데이터&lt;/b&gt;를 구축 할 수 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서 &lt;b&gt;정부&lt;/b&gt;에서는 &lt;b&gt;대규모 투자&lt;/b&gt;를 하여 &lt;b&gt;양질의 데이터&lt;/b&gt;를&lt;b&gt; 생산&lt;/b&gt;할 수 있게 지원하고 있는데, &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그 중의 하나가 &lt;b&gt;디지털 뉴딜 사업&lt;/b&gt;이고, 이러한 디지털 뉴딜 사업의 일환이 &lt;b&gt;AI hub&lt;/b&gt;입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서 딥러닝을 공부하고 모델을 개발하는 사람들이 데이터를 구하는데 어려움이 없도록 &lt;b&gt;AI hub&lt;/b&gt;에 &lt;b&gt;신청&lt;/b&gt;하면 &lt;b&gt;다양한 데이터 (ex: 공공의료데이터, 위성데이터 등)&lt;/b&gt; 를&lt;b&gt; 제공&lt;/b&gt; 받을 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는 위에서 설명한 내용들을 좀 더 구체적으로 알아보도록 하려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;디지털 뉴딜을 설명하기에 앞서 뉴딜이 무엇이고, 한국판 뉴딜이 무엇인지 먼저 알아본 후 디지털 뉴딜과 AI hub에 대해서 설명하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;1. 뉴딜이 무엇인가요?&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;루즈벨트&lt;/b&gt;를 미국 역사상 최초로 4번 연속 대통령으로 이끌었던 &lt;b&gt;정부주도 사업&lt;/b&gt;이 &lt;b&gt;뉴딜정책&lt;/b&gt;입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 뉴딜 정책을 설명하기 위해 잠시 짧게 &lt;b&gt;역사적인 배경&lt;/b&gt;을 설명하도록 하겠습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;1) 1865년 미국: 남북전쟁 종료&lt;/span&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;1865년 미국의 남북전쟁이 끝나면서 &lt;b&gt;통합된 정부&lt;/b&gt;가 수립이 됩니다.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2) 1865~1918년 미국: 미국 재건 시작&lt;/span&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 시기에&lt;b&gt; 유럽&lt;/b&gt;을 중심으로 다른 대륙에서 &lt;b&gt;2750만 명&lt;/b&gt;이라는 &lt;b&gt;이민자&lt;/b&gt;들이 &lt;b&gt;미국&lt;/b&gt;으로 몰려들어오게 됩니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이러한 &lt;b&gt;이민자&lt;/b&gt;들 덕분에 &lt;b&gt;노동력&lt;/b&gt;을 공급 받을 수 있었고, 캘리포니아와 같이 개발되지 않은 지역에&lt;b&gt; 다양한 지역 사회&lt;/b&gt;를 &lt;b&gt;형성&lt;/b&gt;할 수 있었습니다.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;3) 1920년 미국: 미국 경제 호황&lt;/span&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;1918년 1차 세계대전&lt;/b&gt;이 영국, 러시아, 프랑스 &lt;b&gt;연합국&lt;/b&gt;의 &lt;b&gt;승리&lt;/b&gt;로 끝났습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;당시 영국은 러시아와 프랑스에 엄청난 돈을 빌려준 상태였습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, &lt;b&gt;영국&lt;/b&gt; 역시 전쟁 중 이었기 때문에 부족한 돈을 &lt;b&gt;미국 금융가&lt;/b&gt;에게서 빌리게 됩니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;영국은 미국에게 빌린 돈을 패전국인 독일에게 막대한 배상금을 주며 받으려 했습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;물론, 러시아와 프랑스에게 빌려줬던 돈도 받아 미국에게 주려고 했습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 유럽의 경제는 이미 파탄이 난 상태였기 때문에 &lt;b&gt;영국과 미국 모두 돈을 받기 힘든 상태&lt;/b&gt;가 되었습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서, &lt;b&gt;미국&lt;/b&gt;은 &lt;b&gt;유럽&lt;/b&gt; 경제를 살리기 위해 &lt;b&gt;대규모 투&lt;/b&gt;자를 하여 공장을 늘리고 &lt;b&gt;많은 일자리(노동자)&lt;/b&gt;를 만들어 냈습니다. 이로 인해 &lt;b&gt;투자&lt;/b&gt;가&lt;b&gt; 활발&lt;/b&gt;해 지면서 &lt;b&gt;미국&lt;/b&gt; 또한 &lt;b&gt;대규모 경제 호황&lt;/b&gt;을 맞이하게 됐죠&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 미국의 1920년대 배경을 잘 설명해주는 동영상&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=xatt7AQPnMc&quot;&gt;https://www.youtube.com/watch?v=xatt7AQPnMc&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=xatt7AQPnMc&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/AuGfr/hyK3F2Soqb/ek1GkswvSJPJQgk7QbNTJk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=176_44_1082_336&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/xatt7AQPnMc&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;4) 1930년 미국: 미국 경제 대공황&lt;/span&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;늘어난 공장&lt;/b&gt;을 통해 &lt;b&gt;공급&lt;/b&gt;이 &lt;b&gt;수요&lt;/b&gt;를 &lt;b&gt;앞서&lt;/b&gt;게 되자 기업가 또는 &lt;b&gt;투자자&lt;/b&gt;들의 &lt;b&gt;이윤&lt;/b&gt;이 &lt;b&gt;줄어&lt;/b&gt;들게 되었습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그러자 &lt;b&gt;노동자&lt;/b&gt;들을 &lt;b&gt;해고&lt;/b&gt;시켜 노동임금을 줄이고&lt;b&gt; 이윤&lt;/b&gt;을 &lt;b&gt;증가&lt;/b&gt;시키려고 했습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, &lt;b&gt;노동자&lt;/b&gt;들이 곧 &lt;b&gt;수요자&lt;/b&gt;였기 때문에&lt;b&gt; 미국 경제&lt;/b&gt;는 &lt;b&gt;공장도 멈추고 대규모 실업자&lt;/b&gt;가 생겨나게 됩니다.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;1930년대 미국 대공황을 잘 설명한 영상&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=1W3_kkmHR5I&quot;&gt;https://www.youtube.com/watch?v=1W3_kkmHR5I&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=1W3_kkmHR5I&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/bLGi01/hyK3P5tP4q/vGu46Kg82EJuSMoKCf2tIK/img.jpg?width=640&amp;amp;height=480&amp;amp;face=0_0_640_480&quot; data-video-width=&quot;640&quot; data-video-height=&quot;480&quot; data-video-origin-width=&quot;640&quot; data-video-origin-height=&quot;480&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/1W3_kkmHR5I&quot; width=&quot;640&quot; height=&quot;480&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;5)&amp;nbsp;1930년 미국: 미국 경제 대공황&lt;/span&gt;&lt;/h4&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;1933년 &lt;b&gt;루즈벨트&lt;/b&gt; 대통령은 &lt;b&gt;미국&lt;/b&gt;의&lt;b&gt; 경제 대공황&lt;/b&gt;을 &lt;b&gt;극복&lt;/b&gt;하고자 &lt;b&gt;뉴딜 정&lt;/b&gt;책을 실시합니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;뉴딜 정책중 하나가 &lt;b&gt;대규모 토목 사업&lt;/b&gt;인 &lt;b&gt;후버 댐&lt;/b&gt; 건설입니다.&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #202122; font-family: 'Noto Sans Light';&quot;&gt;서부개척과 &lt;b&gt;이민자&lt;/b&gt;들의 유입으로 &lt;b&gt;서부지역&lt;/b&gt;의 &lt;b&gt;환경&lt;/b&gt;을 &lt;b&gt;개선&lt;/b&gt;할 필요가 있었습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #202122; font-family: 'Noto Sans Light';&quot;&gt;하지만, 특히 후버 댐이 건설되기 전까지 &lt;b&gt;홍수 또는 가뭄&lt;/b&gt;에 의해 적절한 농업을 할 수 없게 되었죠.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #202122;&quot;&gt;그래서 &lt;b&gt;후버 대통령&lt;/b&gt; (&lt;span style=&quot;color: #222222;&quot;&gt;1929년 3월 4일 ~ 1933년 3월 4일) 은 &lt;span style=&quot;color: #202122;&quot;&gt;1931년에&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #202122;&quot;&gt;농장을 현대화하고 전력을 공급하고 홍수를 억제하기 위해 &lt;b&gt;엄청난 규모의 댐 공사&lt;/b&gt;를 추진합니다. 그리고, &lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #202122;&quot;&gt;&lt;span style=&quot;color: #222222;&quot;&gt;&lt;span style=&quot;color: #202122;&quot;&gt;자신의 이름을 따 댐의 일음을 후버 댐이라고 명명합니다.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #202122; font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #222222;&quot;&gt;&lt;span style=&quot;color: #202122;&quot;&gt;&lt;b&gt;루즈벨트&lt;/b&gt; 대통령은 본래 하고 있던 &lt;b&gt;후버 댐 사업을 이어받아&lt;/b&gt; &lt;span style=&quot;color: #202122;&quot;&gt;1935년에 준공(=공사를 마침)을 선언합니다.&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 &lt;b&gt;뉴딜 정책&lt;/b&gt;이란 아래와 같이 정리할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;경제 위기를 극복하고자 정부에서 지원한 대규모 공공사업&quot;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;2. 한국판 뉴딜은 무엇인가요?&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;2020.04.22&lt;/b&gt;일 &lt;b&gt;청와대&lt;/b&gt; 제5차 비상경제회의에서 정부가 &lt;b&gt;국가 프로젝트로&lt;/b&gt;써 &lt;b&gt;한국판 뉴딜&lt;/b&gt;을 &lt;b&gt;구상&lt;/b&gt;하겠다는 &lt;b&gt;의사&lt;/b&gt;를 처음 밝힙니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;코로나와 일자리 문제로 인한 경제 위기를 극복하고자 정부에서 대규모 공공사업을 진행하려고 한 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그리고 &lt;b&gt;2020.07.14&lt;/b&gt;일에 &lt;b&gt;'한국판 뉴딜'&lt;/b&gt; &lt;b&gt;국민보고대회&lt;/b&gt;를 통해 1시간 정도 &lt;b&gt;'한국판 뉴딜' 정책 설명&lt;/b&gt;을 진행합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;'한국판 뉴딜' 국민보고대회 관련 뉴스 (풀영상은 따로 검색하면 1시간 짜리 영상이 나옵니다&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=8baEZB88dNY&quot;&gt;https://www.youtube.com/watch?v=8baEZB88dNY&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=8baEZB88dNY&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/b7OUpo/hyK3KQ3F5j/V35NHsqWP3caPoffS8Lwk0/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=518_154_708_360&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/8baEZB88dNY&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;'한국판 뉴딜' 정책&lt;/b&gt;은 크게 &lt;b&gt;두 가지 사업(정책)&lt;/b&gt;으로 구성되어 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;디지털 뉴딜&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그린 뉴딜&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;2020.07.14일&lt;/b&gt; 발표한 &lt;b&gt;'한국판 뉴딜 1.0&lt;/b&gt;'에서는 &lt;b&gt;2025년까지 국고 114조원, 민간과 지자체 포함 160조원을 투자&lt;/b&gt;하여 관련 일자리를 창출할 것이라고 밝혔습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;'한국판 뉴딜 1.0'&lt;/b&gt;의 &lt;b&gt;방향성&lt;/b&gt;은 아래 &lt;b&gt;&quot;그림1&quot;&lt;/b&gt;과 같이 '&lt;b&gt;10대 대표과제'&lt;/b&gt;를 통해 파악할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;700&quot; data-origin-height=&quot;803&quot; data-filename=&quot;한국판뉴딜_1_1_3.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; data-alt=&quot;그림1. 이미지 출처:&amp;amp;amp;nbsp;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbWHEGD%2FbtraImNLXzm%2FcFIJxHsOq5tsSNc3ptp4tk%2Fimg.jpg&quot; data-origin-width=&quot;700&quot; data-origin-height=&quot;803&quot; data-filename=&quot;한국판뉴딜_1_1_3.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림1. 이미지 출처:&amp;nbsp;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot;&gt;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1627562570090&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;[정책위키] 한눈에 보는 정책 - 한국판 뉴딜&quot; data-og-description=&quot;1. 한국판 뉴딜이란?2.한국판 뉴딜의 구조와 추진체계3.분야별 주요 내용4.한국판 뉴딜 주요 추진과제5.한국판 뉴딜 펀드 6.사례로 본 한국판 뉴딜 7.그 밖의 참고자료 / 누리집 1. 한국판 뉴딜이란?&quot; data-og-host=&quot;www.korea.kr&quot; data-og-source-url=&quot;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot; data-og-url=&quot;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/EodRx/hyK2BVmO7v/JogssMVLKXEtMiyZnKEVf0/img.jpg?width=300&amp;amp;height=300&amp;amp;face=0_0_300_300,https://scrap.kakaocdn.net/dn/LQjMK/hyK3OZP313/vqwxufNe05xyJOVDvVX50k/img.jpg?width=300&amp;amp;height=300&amp;amp;face=0_0_300_300,https://scrap.kakaocdn.net/dn/dnk5UX/hyK3N7HeMX/PLaMh7vQ58h1pSPIO62eNk/img.jpg?width=2001&amp;amp;height=2000&amp;amp;face=0_0_2001_2000&quot;&gt;&lt;a href=&quot;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/EodRx/hyK2BVmO7v/JogssMVLKXEtMiyZnKEVf0/img.jpg?width=300&amp;amp;height=300&amp;amp;face=0_0_300_300,https://scrap.kakaocdn.net/dn/LQjMK/hyK3OZP313/vqwxufNe05xyJOVDvVX50k/img.jpg?width=300&amp;amp;height=300&amp;amp;face=0_0_300_300,https://scrap.kakaocdn.net/dn/dnk5UX/hyK3N7HeMX/PLaMh7vQ58h1pSPIO62eNk/img.jpg?width=2001&amp;amp;height=2000&amp;amp;face=0_0_2001_2000');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[정책위키] 한눈에 보는 정책 - 한국판 뉴딜&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;1. 한국판 뉴딜이란?2.한국판 뉴딜의 구조와 추진체계3.분야별 주요 내용4.한국판 뉴딜 주요 추진과제5.한국판 뉴딜 펀드 6.사례로 본 한국판 뉴딜 7.그 밖의 참고자료 / 누리집 1. 한국판 뉴딜이란?&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.korea.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그리고 1년 뒤 &lt;b&gt;2021.07.14일&lt;/b&gt;에&lt;b&gt; '한국판 뉴딜 2.0'&lt;/b&gt;을 발표하게 됩니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;'한국판 뉴딜1.0'의 방향성을 그대로 가져가면서 &lt;b&gt;투자액&lt;/b&gt;을 좀 더 &lt;b&gt;증액&lt;/b&gt;할 것이라고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=ap0Wz5CTMOM&quot;&gt;https://www.youtube.com/watch?v=ap0Wz5CTMOM&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=ap0Wz5CTMOM&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/csU8cu/hyK3E4pfFe/mxQa00hxq4KaGFcFuJre2K/img.jpg?width=480&amp;amp;height=360&amp;amp;face=205_101_270_172&quot; data-video-width=&quot;480&quot; data-video-height=&quot;360&quot; data-video-origin-width=&quot;480&quot; data-video-origin-height=&quot;360&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/ap0Wz5CTMOM&quot; width=&quot;480&quot; height=&quot;360&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 언급했듯이 &lt;b&gt;'한국판 뉴딜'&lt;/b&gt; 정책은 크게&lt;b&gt; 두 가지&lt;/b&gt;로 분류 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;디지털 뉴딜&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그린 뉴딜&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 &lt;b&gt;디지털 뉴딜&lt;/b&gt;에 대해서 알아보기로 했으니, 지금부터 &lt;b&gt;디지털 뉴딜&lt;/b&gt;에 대해 좀 더 자세히 설명드리도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;3. 디지털 뉴딜은 무엇인가요?&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;'한국판 뉴딜' 중 하나의 중심축을 담당하는 &lt;b&gt;디지털 뉴딜&lt;/b&gt;은 크게 &lt;b&gt;세 가지 방향성&lt;/b&gt;을 갖고 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;데이터 댐&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지능형 정부&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;스마트 의료인프라&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;700&quot; data-origin-height=&quot;803&quot; data-filename=&quot;한국판뉴딜_1_1_3.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; data-alt=&quot;그림1. 이미지 출처:&amp;amp;amp;nbsp;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bWHEGD/btraImNLXzm/cFIJxHsOq5tsSNc3ptp4tk/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbWHEGD%2FbtraImNLXzm%2FcFIJxHsOq5tsSNc3ptp4tk%2Fimg.jpg&quot; data-origin-width=&quot;700&quot; data-origin-height=&quot;803&quot; data-filename=&quot;한국판뉴딜_1_1_3.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림1. 이미지 출처:&amp;nbsp;https://www.korea.kr/special/policyCurationView.do?newsId=148874860&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;여기서 제가 소개해 드릴 부분은 &lt;b&gt;&quot;데이터 댐&quot;&lt;/b&gt;과 &lt;b&gt;&quot;스마트 의료인프라&quot;&lt;/b&gt;입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;&lt;b&gt;3-1. 데이터 댐&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;오늘 날 적용하는 인공지능의 대표격인 &lt;b&gt;딥러닝&lt;/b&gt;은 크게 &lt;b&gt;두 가지 요소&lt;/b&gt;를 필요로 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;데이터&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;li&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;딥러닝 모델 with 학습 방법론&lt;/span&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;학계&lt;/b&gt;에서 열심히 연구하는 부분들은 &quot;&lt;b&gt;2. 딥러닝 모델 with 학습 방법론&quot;&lt;/b&gt;입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;수 많은 연구자들이 어떻게 하면 효율적인 딥러닝 모델을 만들 수 있을지 연구하여 논문으로 발표합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, 이러한 &lt;b&gt;딥러닝 연구&lt;/b&gt;가 진행되기 위해서 반드시 &lt;b&gt;선행&lt;/b&gt;되어야 하는 &lt;b&gt;작업&lt;/b&gt;이 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;바로 딥러닝 모델을 학습 시키기 위한 &lt;b&gt;'데이터'&lt;/b&gt;를 &lt;b&gt;만드는 작업&lt;/b&gt;입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 &lt;b&gt;양질의 '데이터'&lt;/b&gt;가 있어야 &lt;b&gt;효율적인 인공지능 연구&lt;/b&gt;들이 가능합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서 정부는 &lt;b&gt;디지털 뉴딜&lt;/b&gt;의 일환으로 데이터들을 한 곳에 저장 시킬 &lt;b&gt;&quot;데이터 댐&quot;&lt;/b&gt;을 만들기로 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;정부는 &lt;b&gt;데이터 댐&lt;/b&gt;에 관련 이야기를 언급하면서 &lt;b&gt;&quot;D.N.A 생태계 강화&quot;&lt;/b&gt;를 강조했습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;'D.N.A'&lt;/b&gt;는 &lt;b&gt;'Data. Network. AI'&lt;/b&gt; 약자인데, 데이터들을 하나의 저장소(=데이터 댐)에 모아 5G 기반의 통신만 기술을 이용해 AI 관련 사업하는 사람들에게 제공하면 어떻게 될까요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위와 같이 공공 데이터 저장소가 있으면 많은 &lt;b&gt;인공지능 기업과 연구자&lt;/b&gt;들이 데이터를 모으는 수고를 덜 면서 &lt;b&gt;연구&lt;/b&gt;에만 &lt;b&gt;몰두&lt;/b&gt; 할 수 있게 됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;'디지털 뉴딜'에서 데이터 댐에 관한 설명&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=9iNjaG72SSo&quot;&gt;https://www.youtube.com/watch?v=9iNjaG72SSo&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=9iNjaG72SSo&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/7COfX/hyK3OlHnWi/DzYWBGJkXUKRm1dcUXyhUk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=236_154_816_320&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/9iNjaG72SSo&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;'데이터 댐' 관련 설명&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=EWIq6aWxbdg&quot;&gt;https://www.youtube.com/watch?v=EWIq6aWxbdg&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=EWIq6aWxbdg&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/KPKfw/hyK3LB2Zro/oVHLjHPkqJO4B6ljSR74sK/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=718_110_780_178&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/EWIq6aWxbdg&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;&lt;b&gt;3-2. 스마트 의료인프라&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;의료&lt;/b&gt;가&lt;b&gt; 인공지능 및 IT&lt;/b&gt;와 &lt;b&gt;결합&lt;/b&gt;하면서 &lt;b&gt;개선&lt;/b&gt;될 수 있는 부분은 굉장히 많습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;몇 가지 예를 들면, 웨어러블 디바이스 또는 실시간 감지를 통해 사람의 생처정보 및 행동패턴을 분석하여 환자들을 지속 관찰할 수 도 있고, 성능 좋은 의료인공지능 모델로 인해 정확성 높고 더 빠른 진료를 가능하게 할 수도 있습니다. 또한 원격의료 시스템을 지원할 수 도 있고, 불필요한 중간체계를 줄여주어 의료비용을 감소시킬 수 도 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;디지털 헬스케어와 관련해서 작성한 글&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/195?category=986138&quot;&gt;https://89douner.tistory.com/195?category=986138&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1627619950954&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;1. 미국의 Healthcare환경 및 의료시스템&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 미국의 healthcare환경 및 의료 시스템에 대해서 알아보도록 하겠습니다. 사실, 각 나라마다 의료환경이 다릅니다. 예를 들어, 유럽의 의료환경, 한국의 의료환경, 미국&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/195?category=986138&quot; data-og-url=&quot;https://89douner.tistory.com/195&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/vtV4G/hyK3IeH1l0/EdhesYFJcslLAQzuXjzLO1/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207,https://scrap.kakaocdn.net/dn/wohuq/hyK3N1o6Mm/hX5Jz7bKsUgKAdEyo8ykC1/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207,https://scrap.kakaocdn.net/dn/ec9EPY/hyK3BzSZiz/ywigdkBvKg26lGfqpEK2d0/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/195?category=986138&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/195?category=986138&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/vtV4G/hyK3IeH1l0/EdhesYFJcslLAQzuXjzLO1/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207,https://scrap.kakaocdn.net/dn/wohuq/hyK3N1o6Mm/hX5Jz7bKsUgKAdEyo8ykC1/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207,https://scrap.kakaocdn.net/dn/ec9EPY/hyK3BzSZiz/ywigdkBvKg26lGfqpEK2d0/img.jpg?width=566&amp;amp;height=207&amp;amp;face=0_0_566_207');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;1. 미국의 Healthcare환경 및 의료시스템&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 미국의 healthcare환경 및 의료 시스템에 대해서 알아보도록 하겠습니다. 사실, 각 나라마다 의료환경이 다릅니다. 예를 들어, 유럽의 의료환경, 한국의 의료환경, 미국&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;디지털 뉴딜 사업 중 &quot;스마트 의료인프라&quot;관련 영상&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=d2d6-hRXXg4&quot;&gt;https://www.youtube.com/watch?v=d2d6-hRXXg4&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=d2d6-hRXXg4&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/hayfy/hyK3O0iGdQ/VqwxoryaOKB3YZ7AuoqnXk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=142_170_1072_510&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/d2d6-hRXXg4&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 의료분야에서 인공지능 모델을 이용하려는 연구자들은 위에서 언급한 '데이터 댐', '스마트 의료인프라'와 모두 관련있는 일을 하고 있으시다고 보셔도 될 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;4. AI hub는 무엇인가요?&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;만약 본인이 의료 인공지능을 연구하고 있다고 하신다면 연구를 어떻게 하고 있으신가요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;딥러닝&lt;/b&gt; 모델, 그 외 인공지능 학습방법론 등과 관련된 분야들 (ex: unsupervised learning, domain adaptation, CNN, GNN, etc ...) 을 연구하시고 있으실텐데, 이러한 &lt;b&gt;연구&lt;/b&gt;를 하기 위해서는 &lt;b&gt;데이터&lt;/b&gt;가 &lt;b&gt;필요&lt;/b&gt;하실 겁니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 &lt;b&gt;의료&lt;/b&gt; 분야에서&lt;b&gt; 인공지능&lt;/b&gt; 모델을 연구하기 위한 방법은 두 가지 입니다.&lt;/span&gt;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;사설 병원 (ex: 서울아산병원, 삼성서울병원, 서울대병원, 연세대병원, 고려대 병원 등..)의&lt;b&gt; private dataset&lt;/b&gt;을 이용한 연구 진행 방식&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Kaggle, MICCAI 에서 개최하는 challenge에서 제공해주는 &lt;b&gt;public dataset&lt;/b&gt;을 이용한 연구 진행 방식&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Private dataset&lt;/b&gt;은 &lt;b&gt;양질&lt;/b&gt;의 데이터이지만 일반 연구자들이 &lt;b&gt;접근&lt;/b&gt;하기가 &lt;b&gt;쉽지 않고&lt;/b&gt;, &lt;b&gt;public dataset&lt;/b&gt;의 경우에는 &lt;b&gt;접근성&lt;/b&gt;은 &lt;b&gt;좋지&lt;/b&gt;만&lt;b&gt; 품질&lt;/b&gt;이 &lt;b&gt;떨어지&lt;/b&gt;는 경우가 종종 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그래서 &lt;b&gt;정부&lt;/b&gt;는&lt;b&gt; 양질의 데이터를 공공 개방&lt;/b&gt;할 수 있도록&lt;b&gt; '데이터 댐'&lt;/b&gt;을 만들기로 했는데, 이를 &lt;b&gt;대표하는 곳&lt;/b&gt;이 &lt;b&gt;AI-hub&lt;/b&gt;라는 곳입니다. (미리 말씀 드리면 의료 데이터는 다른 데이터들에 비해 데이터를 신청하는 것이 까다롭습니다. 아무래도 사람과 관련된 데이터라 엄격할 수 밖에 없겠죠? 하지만, private dataset은 보통 해당 기관 사람 아니면 접근이 거의 불가능 한 점을 보면 상대적으로 이용하기 수월하다고도 할 수 있을 것 같습니다)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; AI-hub 사업 설명 영상&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=sEFPh1f63aE&quot;&gt;https://www.youtube.com/watch?v=sEFPh1f63aE&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=sEFPh1f63aE&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/LHWCd/hyK2DyNfRf/IVJ4BT1gPy3eZdDbm0jOy1/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/sEFPh1f63aE&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; AI-hub 사이트&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://aihub.or.kr/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://aihub.or.kr/&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1627620735763&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;홈 | AI 허브&quot; data-og-description=&quot;AI 데이터를 찾으시나요? AI 학습에 필요한 다양한 데이터를 제공합니다. 원하시는 분야를 선택해 보세요.&quot; data-og-host=&quot;aihub.or.kr&quot; data-og-source-url=&quot;https://aihub.or.kr/&quot; data-og-url=&quot;https://aihub.or.kr/&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/netXp/hyK3KQ8KAk/KIuvK1EcAEqQFh5pi9oqVk/img.png?width=348&amp;amp;height=282&amp;amp;face=0_0_348_282,https://scrap.kakaocdn.net/dn/bvAaWa/hyK3CFy6CP/KKmSN9BcTMbuoweNNYXOT1/img.png?width=348&amp;amp;height=282&amp;amp;face=0_0_348_282&quot;&gt;&lt;a href=&quot;https://aihub.or.kr/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://aihub.or.kr/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/netXp/hyK3KQ8KAk/KIuvK1EcAEqQFh5pi9oqVk/img.png?width=348&amp;amp;height=282&amp;amp;face=0_0_348_282,https://scrap.kakaocdn.net/dn/bvAaWa/hyK3CFy6CP/KKmSN9BcTMbuoweNNYXOT1/img.png?width=348&amp;amp;height=282&amp;amp;face=0_0_348_282');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;홈 | AI 허브&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;AI 데이터를 찾으시나요? AI 학습에 필요한 다양한 데이터를 제공합니다. 원하시는 분야를 선택해 보세요.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;aihub.or.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 사이트에 접속하시면 아래와 같은 화면을 보실 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;굉장히 다양한 데이터들 종류가 있는데, 이 중에 헬스케어와 관련된 데이터들은 무엇이 있는지 살펴보도록 하겠습니다 (아래 '헬스케어' 부분을 클릭하시면 됩니다)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1407&quot; data-origin-height=&quot;961&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/daeTEO/btraRwnQu4k/ycRyVdqdeamalNK9BSYcy1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/daeTEO/btraRwnQu4k/ycRyVdqdeamalNK9BSYcy1/img.png&quot; data-alt=&quot;그림2&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/daeTEO/btraRwnQu4k/ycRyVdqdeamalNK9BSYcy1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdaeTEO%2FbtraRwnQu4k%2FycRyVdqdeamalNK9BSYcy1%2Fimg.png&quot; data-origin-width=&quot;1407&quot; data-origin-height=&quot;961&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림2&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;헬스케어 데이터들을 살펴보니 '이미지', '비디오', '오디오', '3D', '센서' 등 다양한 의료데이터들이 있네요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 중에서 '간암 진단 의료 영상'을 클릭해서 보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1224&quot; data-origin-height=&quot;704&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oDH9h/btraNIvObfh/hyVGkaHNtQHHKpOJ0pNsQ1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oDH9h/btraNIvObfh/hyVGkaHNtQHHKpOJ0pNsQ1/img.png&quot; data-alt=&quot;그림3&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oDH9h/btraNIvObfh/hyVGkaHNtQHHKpOJ0pNsQ1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoDH9h%2FbtraNIvObfh%2FhyVGkaHNtQHHKpOJ0pNsQ1%2Fimg.png&quot; data-origin-width=&quot;1224&quot; data-origin-height=&quot;704&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림3&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;'간암 진단 의료 영상' 데이터와 관련된 정보들이 보이네요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 &quot;그림4&quot;에서 우측 하단을 부분에서 교육활용동영상이라는 부분이 있는데, 해당 영상을 클릭하면 데이터에 대한 설명을 유튜브 영상으로 보여줍니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1244&quot; data-origin-height=&quot;960&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bIgU4V/btraRxtxcWt/kES15oQXUKxeMVyAXzz4C1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bIgU4V/btraRxtxcWt/kES15oQXUKxeMVyAXzz4C1/img.png&quot; data-alt=&quot;그림4&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bIgU4V/btraRxtxcWt/kES15oQXUKxeMVyAXzz4C1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbIgU4V%2FbtraRxtxcWt%2FkES15oQXUKxeMVyAXzz4C1%2Fimg.png&quot; data-origin-width=&quot;1244&quot; data-origin-height=&quot;960&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림4&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;간암 진단 의료 영상 관련 교육활용동영상&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=QQXQhWsy2YE&quot;&gt;https://www.youtube.com/watch?v=QQXQhWsy2YE&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=QQXQhWsy2YE&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/b55SCd/hyK3GBgFx1/v0keICUQe8tL9p1snyM9E1/img.jpg?width=480&amp;amp;height=360&amp;amp;face=0_0_480_360&quot; data-video-width=&quot;480&quot; data-video-height=&quot;360&quot; data-video-origin-width=&quot;480&quot; data-video-origin-height=&quot;360&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/QQXQhWsy2YE&quot; width=&quot;480&quot; height=&quot;360&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;굉장히 다양한 기관에서 의료 데이터를 만들기 위해 참여했다는것을 확인할 수 있었습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;895&quot; data-origin-height=&quot;451&quot; data-filename=&quot;blob&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/boY1Ob/btraNIWSRjD/aGIG3CFt6EFQVwkCjpb121/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/boY1Ob/btraNIWSRjD/aGIG3CFt6EFQVwkCjpb121/img.png&quot; data-alt=&quot;그림5&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/boY1Ob/btraNIWSRjD/aGIG3CFt6EFQVwkCjpb121/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FboY1Ob%2FbtraNIWSRjD%2FaGIG3CFt6EFQVwkCjpb121%2Fimg.png&quot; data-origin-width=&quot;895&quot; data-origin-height=&quot;451&quot; data-filename=&quot;blob&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림5&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;데이터 형테는 DICOM, PNG, JSON 형식으로 제공되고, 폴더 트리에 대한 설명도 자세하게 적어놓은 것을 알 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;915&quot; data-origin-height=&quot;963&quot; data-filename=&quot;제목 없음2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dah98Y/btraVpaF0J1/UjIkuhveAd0icBmaXVoSTK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dah98Y/btraVpaF0J1/UjIkuhveAd0icBmaXVoSTK/img.png&quot; data-alt=&quot;그림6&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dah98Y/btraVpaF0J1/UjIkuhveAd0icBmaXVoSTK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdah98Y%2FbtraVpaF0J1%2FUjIkuhveAd0icBmaXVoSTK%2Fimg.png&quot; data-origin-width=&quot;915&quot; data-origin-height=&quot;963&quot; data-filename=&quot;제목 없음2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림6&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;DICOM 파일 관련 설명&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/293&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://89douner.tistory.com/293&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1627622082225&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;3-1. DICOM 파일이란? (Feat. Definition, PACS, digital image 습득과정)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 의료 영상(Medical imaging)에서 사용하는 DICOM 파일이 어떻게 생겨났는지, 어떻게 이용되고 있는지 알아보도록 하겠습니다. 또한 DICOM이라는 것이 digital image이기 때문에, &quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/293&quot; data-og-url=&quot;https://89douner.tistory.com/293&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bQFI67/hyK3KXXdph/FaR8ZUgZn7O3SpG8k5PVmk/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/bqZxay/hyK3LP50jQ/lpK7uQwgRwrF94NTVYbDmk/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/bGSfdw/hyK3B7LVf6/0Cja95wAjr9mVRLmNBFVAK/img.png?width=1465&amp;amp;height=520&amp;amp;face=0_0_1465_520&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/293&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/293&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bQFI67/hyK3KXXdph/FaR8ZUgZn7O3SpG8k5PVmk/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/bqZxay/hyK3LP50jQ/lpK7uQwgRwrF94NTVYbDmk/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/bGSfdw/hyK3B7LVf6/0Cja95wAjr9mVRLmNBFVAK/img.png?width=1465&amp;amp;height=520&amp;amp;face=0_0_1465_520');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;3-1. DICOM 파일이란? (Feat. Definition, PACS, digital image 습득과정)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 의료 영상(Medical imaging)에서 사용하는 DICOM 파일이 어떻게 생겨났는지, 어떻게 이용되고 있는지 알아보도록 하겠습니다. 또한 DICOM이라는 것이 digital image이기 때문에,&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면 이러한 데이터들을 어떻게 이용할 수 있을까요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 '그림3'에서 '간암 진단 의료 영상'을 클릭하고 들어가면 아래 화면이 나오는걸 '그림4'에서 확인하셨을 거에요. 이 화면에서 아래 &quot;그림7&quot;과 같이 '이용신청'을 클릭해줍니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1281&quot; data-origin-height=&quot;1006&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cgmlHz/btraKEAXNTv/8ik2mF0uqKks9GZmdRTkMK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cgmlHz/btraKEAXNTv/8ik2mF0uqKks9GZmdRTkMK/img.png&quot; data-alt=&quot;그림7&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cgmlHz/btraKEAXNTv/8ik2mF0uqKks9GZmdRTkMK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcgmlHz%2FbtraKEAXNTv%2F8ik2mF0uqKks9GZmdRTkMK%2Fimg.png&quot; data-origin-width=&quot;1281&quot; data-origin-height=&quot;1006&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림7&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 화면과 같이 데이터를 신청하는 절차가 나오네요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;의료 데이터는 이용신청을 위해 좀 더 까다로운 절차가 진행됩니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;961&quot; data-origin-height=&quot;792&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bZlKMB/btraNIbxNpy/DvLM423BPWT9Jn7rhsHfGk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bZlKMB/btraNIbxNpy/DvLM423BPWT9Jn7rhsHfGk/img.png&quot; data-alt=&quot;그림8&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bZlKMB/btraNIbxNpy/DvLM423BPWT9Jn7rhsHfGk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbZlKMB%2FbtraNIbxNpy%2FDvLM423BPWT9Jn7rhsHfGk%2Fimg.png&quot; data-origin-width=&quot;961&quot; data-origin-height=&quot;792&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;그림8&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위에서 설명하고 있는 절차중 'IRB'관련 내용은 아래 글을 참고해주시면 좋을 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/295&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://89douner.tistory.com/295&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1627622961536&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;4. 한국에서 의료 데이터를 다루기 위한 행정절차&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 한국에서 의료 데이터를 다루기 위한 행정적 절차에 대해 소개해드리려고 합니다. 1. Motivation (의료 데이터를 사용하는데 왜 행정적 절차가 필요할까요?) 딥러닝 모델&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/295&quot; data-og-url=&quot;https://89douner.tistory.com/295&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/c9mZij/hyK3EwEmNx/tFLNzSsAimsHSFxX7RWDwK/img.png?width=800&amp;amp;height=478&amp;amp;face=0_0_800_478,https://scrap.kakaocdn.net/dn/Rvaa7/hyK3LifD8V/ikTFTo2FRZYDpmOC4OLOX0/img.png?width=800&amp;amp;height=478&amp;amp;face=0_0_800_478,https://scrap.kakaocdn.net/dn/bJOK8v/hyK3QjAU3f/rbuXEB9k09lsmIi2gVkON1/img.png?width=1138&amp;amp;height=731&amp;amp;face=0_0_1138_731&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/295&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/295&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/c9mZij/hyK3EwEmNx/tFLNzSsAimsHSFxX7RWDwK/img.png?width=800&amp;amp;height=478&amp;amp;face=0_0_800_478,https://scrap.kakaocdn.net/dn/Rvaa7/hyK3LifD8V/ikTFTo2FRZYDpmOC4OLOX0/img.png?width=800&amp;amp;height=478&amp;amp;face=0_0_800_478,https://scrap.kakaocdn.net/dn/bJOK8v/hyK3QjAU3f/rbuXEB9k09lsmIi2gVkON1/img.png?width=1138&amp;amp;height=731&amp;amp;face=0_0_1138_731');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;4. 한국에서 의료 데이터를 다루기 위한 행정절차&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 한국에서 의료 데이터를 다루기 위한 행정적 절차에 대해 소개해드리려고 합니다. 1. Motivation (의료 데이터를 사용하는데 왜 행정적 절차가 필요할까요?) 딥러닝 모델&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이외에 &quot;생명윤리준수서약서&quot;도 작성하여 제출해야 하니 참고하시면 될 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 '한국판 뉴딜', '디지털 뉴딜', 'AI-hub'를 소개하면서 의료 인공지능을 하시는 분들이 어떻게 양질의 데이터를 신청하고 모을 수 있는지 알아보았습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;감사합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Medical  AI research/Background</category>
      <category>AI 허브</category>
      <category>AI-hub</category>
      <category>뉴딜</category>
      <category>데이터 댐</category>
      <category>디지털 뉴딜</category>
      <category>생명윤리준수서약서</category>
      <category>한국판 뉴딜</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/307</guid>
      <comments>https://89douner.tistory.com/307#entry307comment</comments>
      <pubDate>Fri, 8 Oct 2021 17:16:20 +0900</pubDate>
    </item>
    <item>
      <title>5. 한국에서 의료 데이터를 다루기 위한 행정절차</title>
      <link>https://89douner.tistory.com/295</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;안녕하세요.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;이번 글에서는 한국에서 의료 데이터를 다루기 위한 행정적 절차에 대해 소개해드리려고 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. Motivation (의료 데이터를 사용하는데 왜 행정적 절차가 필요할까요?)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;딥러닝 모델을 이용해 Chest-Xray, CT 등의 의료 영상을 classification, segmentation, detection을 할 때, 많은 분들이 Kaggle과 같은 public data를 이용합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;반면 병원에서 마련한 private dataset을 이용할 때도 있습니다. 즉, 병원소속 연구원들은 private dataset을 이용해 병원 자체 연구를 하기도 하죠.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;그런데, 병원 데이터(=private dataset)이라는 것이 대부분 임상데이터입니다.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;Q. 임상시험이란?&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;A. &lt;span style=&quot;color: #202124;&quot;&gt;사람을 직접 대상으로, 사람에게서 추출(또는 적출)된 검체나 사람에 대한 정보를 이용하여 이루어지는 모든 시험&lt;span&gt; &amp;rarr; 즉, 임상데이터는 사람으로 부터 얻어지는 데이터&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;즉, 사람에 대한 데이터를 사용하는 것이기 때문에 굉장히 엄격한 윤리적 기준이 적용됩니다. 결국 연구자들은 이러한 윤리적 기준에 대해서 잘 알고 있어야 하기 때문에 관련 교육이 필요합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. 프로그램 수강 절차&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;위에서 언급한 교육을 수강하는 순서를 말씀드리도록 하겠습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;2-1. 질병보건통합관리시스템 접속후 사용자 가입&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;해당 프로그램을 듣기 위해서는 &quot;질병관리청 교육시스템&quot;에 접속해야 합니다.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&quot;질병관리청 교육시스템&quot;에 접속하기 위해서는 &quot;질병관리청 질병보건통합관리시스템&quot;에서 먼저 사용자 가입을 해야 합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://is.kdca.go.kr/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://is.kdca.go.kr/&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1625463834715&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;질병관리청 질병보건통합관리시스템&quot; data-og-description=&quot;&quot; data-og-host=&quot;is.kdca.go.kr&quot; data-og-source-url=&quot;https://is.kdca.go.kr/&quot; data-og-url=&quot;https://is.kdca.go.kr/&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://is.kdca.go.kr/&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://is.kdca.go.kr/&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;질병관리청 질병보건통합관리시스템&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;is.kdca.go.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; width=&quot;724&quot; height=&quot;433&quot; data-origin-width=&quot;1289&quot; data-origin-height=&quot;771&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bFvKev/btq8TwXtCzI/TiiXKkPrjgHYQ5FKMY4OP0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bFvKev/btq8TwXtCzI/TiiXKkPrjgHYQ5FKMY4OP0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bFvKev/btq8TwXtCzI/TiiXKkPrjgHYQ5FKMY4OP0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbFvKev%2Fbtq8TwXtCzI%2FTiiXKkPrjgHYQ5FKMY4OP0%2Fimg.png&quot; width=&quot;724&quot; height=&quot;433&quot; data-origin-width=&quot;1289&quot; data-origin-height=&quot;771&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;가입 시 근무하고 있는 병원소속같은 정보를 기입해주면 정상적으로 가입이 완료됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;2-2. 질병관리청 교육시스템에서 임상교육개론 강의듣기&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;앞서 &quot;질병관리청 질병보건통합관리시스템&quot;에 사용자 가입을 완료 하셨다면, &quot;질병관리청 교육시스템&quot;에서 임상교육개론 강의를 들으시면 됩니다. (로그인시 공인인증서가 필요합니다)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;강의를 듣는 순서는 아래와 같습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;1) 질병관리청 교육시스템 접속 및 로그인&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://edu.kdca.go.kr/edu/index.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://edu.kdca.go.kr/edu/index.html&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1625465189386&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;질병관리청 교육사이트&quot; data-og-description=&quot;&quot; data-og-host=&quot;edu.kdca.go.kr&quot; data-og-source-url=&quot;https://edu.kdca.go.kr/edu/index.html&quot; data-og-url=&quot;https://edu.kdca.go.kr/edu/index.html&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://edu.kdca.go.kr/edu/index.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://edu.kdca.go.kr/edu/index.html&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;질병관리청 교육사이트&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;edu.kdca.go.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;2) 과정안내 클릭&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1209&quot; data-origin-height=&quot;853&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;519&quot; height=&quot;366&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/biiolk/btq8OjSSrGf/ClENZWbvEvnGpVOtAh02Ek/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/biiolk/btq8OjSSrGf/ClENZWbvEvnGpVOtAh02Ek/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/biiolk/btq8OjSSrGf/ClENZWbvEvnGpVOtAh02Ek/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbiiolk%2Fbtq8OjSSrGf%2FClENZWbvEvnGpVOtAh02Ek%2Fimg.png&quot; data-origin-width=&quot;1209&quot; data-origin-height=&quot;853&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;519&quot; height=&quot;366&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;3) 과정명에 &quot;임상연구개론&quot; 입력 후, 관련 교육 신청&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1279&quot; data-origin-height=&quot;851&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;671&quot; height=&quot;446&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/MLCgv/btq8RTZXMg1/TRMM3ek2VrwKZmDV1XspO0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/MLCgv/btq8RTZXMg1/TRMM3ek2VrwKZmDV1XspO0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/MLCgv/btq8RTZXMg1/TRMM3ek2VrwKZmDV1XspO0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FMLCgv%2Fbtq8RTZXMg1%2FTRMM3ek2VrwKZmDV1XspO0%2Fimg.png&quot; data-origin-width=&quot;1279&quot; data-origin-height=&quot;851&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;671&quot; height=&quot;446&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;4) 나의 강의실에서 해당 강의 수강&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;영상길이는 2시간 정도이며, 따로 시험은 없었습니다.&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1138&quot; data-origin-height=&quot;731&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bd1h1J/btq8RtUImYT/235R4f1XkLvab6TtICRK3K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bd1h1J/btq8RtUImYT/235R4f1XkLvab6TtICRK3K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bd1h1J/btq8RtUImYT/235R4f1XkLvab6TtICRK3K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbd1h1J%2Fbtq8RtUImYT%2F235R4f1XkLvab6TtICRK3K%2Fimg.png&quot; data-origin-width=&quot;1138&quot; data-origin-height=&quot;731&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;5) 이후 수료증출력을 클릭하시고, 수료증을 pdf로 다운받으시면 됩니다.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1167&quot; data-origin-height=&quot;745&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;724&quot; height=&quot;462&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/egcBvn/btq8SA6TkvB/EUtssJepq6BikC1NXW1in0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/egcBvn/btq8SA6TkvB/EUtssJepq6BikC1NXW1in0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/egcBvn/btq8SA6TkvB/EUtssJepq6BikC1NXW1in0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FegcBvn%2Fbtq8SA6TkvB%2FEUtssJepq6BikC1NXW1in0%2Fimg.png&quot; data-origin-width=&quot;1167&quot; data-origin-height=&quot;745&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;724&quot; height=&quot;462&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;※참고로 임상교육개론으로 받은 수료증은 주기적으로 갱신해줘야 한다니 참고해두세요!&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3. IRB(&lt;span style=&quot;color: #4d5156;&quot;&gt;Institutional Review Board, 의학연구윤리심의위원회&lt;/span&gt;) 허가&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;의료 데이터 연구를 이용하여 연구를 진행하기 위해서는 해당 의료 데이터를 보유하고 있는 기관(ex: 병원)에 정식으로 IRB 허가를 신청해야합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;IRB 허가 신청을 하기 위해서는 GCP (Good Clinical Practice) 교육을 이수해야 하는데, 앞선 &quot;임상교육개론&quot;이 이에 해당합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;IRB 허가신청 절차는 앞선 &quot;임상교육개론&quot; 수료증 및 몇 가지 첨부 파일을 구비하여, 내가 근무하고 있는 병원 IRB 사이트에 등록을 하면 허가절차가 진행됩니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;4. 과학기술인등록번호&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;보통 병원에서 의료 데이터를 기반으로 진행하는 연구들은 국가지원과제인 경우가 많습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;그래서 보통 국가정보사이트에 과학기술인으로 등록되어야 하기 때문에 아래 사이트에서 과학기술인 등록을 진행합니다.&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1625465968222&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;NTIS &amp;gt; 국가연구자번호&quot; data-og-description=&quot;&quot; data-og-host=&quot;www.ntis.go.kr&quot; data-og-source-url=&quot;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&quot; data-og-url=&quot;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.ntis.go.kr/hurims/hmreg/researcher/reg/checkRealNm.do&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;NTIS &amp;gt; 국가연구자번호&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.ntis.go.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;지금까지 병원에서 의료 데이터를 사용하기 위한 간단한 행정절차에 대해서 알아봤습니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;감사합니다.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;[Reference]&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&amp;amp;blogId=atelierjpro&amp;amp;logNo=221330719904&quot;&gt;https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&amp;amp;blogId=atelierjpro&amp;amp;logNo=221330719904&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure id=&quot;og_1625465704853&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;의료 데이터 연구자 윤리 교육 (필수)&quot; data-og-description=&quot;의료 데이터를 사용하는 연구를 진행하기 위해서는 데이터 보유 기관에 정식으로 IRB라는 허가 절차를 ...&quot; data-og-host=&quot;blog.naver.com&quot; data-og-source-url=&quot;https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&amp;amp;blogId=atelierjpro&amp;amp;logNo=221330719904&quot; data-og-url=&quot;https://blog.naver.com/atelierjpro/221330719904&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/rOpU9/hyKMHHMMI6/vpnd0JwFKDQlt8bN8zFy81/img.png?width=270&amp;amp;height=270&amp;amp;face=0_0_270_270&quot;&gt;&lt;a href=&quot;https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&amp;amp;blogId=atelierjpro&amp;amp;logNo=221330719904&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&amp;amp;blogId=atelierjpro&amp;amp;logNo=221330719904&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/rOpU9/hyKMHHMMI6/vpnd0JwFKDQlt8bN8zFy81/img.png?width=270&amp;amp;height=270&amp;amp;face=0_0_270_270');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;의료 데이터 연구자 윤리 교육 (필수)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;의료 데이터를 사용하는 연구를 진행하기 위해서는 데이터 보유 기관에 정식으로 IRB라는 허가 절차를 ...&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;blog.naver.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Medical  AI research/Background</category>
      <category>Good Clinical Practice</category>
      <category>Medical AI</category>
      <category>과학기술인등록번호</category>
      <category>의료 인공지능</category>
      <category>임상교육개론</category>
      <category>질병관리청 교육시스템</category>
      <category>질병보건통합관리시스템</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/295</guid>
      <comments>https://89douner.tistory.com/295#entry295comment</comments>
      <pubDate>Fri, 8 Oct 2021 17:16:10 +0900</pubDate>
    </item>
    <item>
      <title>4-2. DICOM 파일 전처리 (Feat. 이미지 저장)</title>
      <link>https://89douner.tistory.com/336</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는 dicom 파일을 딥러닝 학습에 맞게 변환시켜주기 위해 조정해주는 전처리 기법에 대해서 알려드리려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. DICOM 파일의 bit depth (Feat. Hounsfield unit)&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;한 이미지를 구성하고 있는 pixel 값의 범위는 기본적으로 0~255입니다.&amp;nbsp; \(0\sim2^8\)&lt;span style=&quot;color: #666666;&quot;&gt;의 범위를 갖고 있기 때문에 8bit depth를 갖고 할 수 있습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;bit depth (or color depth; 색 깊이)에 대한 설명은 아래 글을 참고해 주세요&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/293&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://89douner.tistory.com/293&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1633605340242&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;3-1. DICOM 파일이란? (Feat. Definition, PACS, digital image 습득과정)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 의료 영상(Medical imaging)에서 사용하는 DICOM 파일이 어떻게 생겨났는지, 어떻게 이용되고 있는지 알아보도록 하겠습니다. 또한 DICOM이라는 것이 digital image이기 때문에, &quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/293&quot; data-og-url=&quot;https://89douner.tistory.com/293&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/dXV0Kp/hyLSrjrltV/bDcInPzlsJslLSUrkPiay1/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/btT0kF/hyLSz9BllL/FUcSmoEbNxJGEQYE95CKu0/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/hqwtu/hyLStuNuuu/bK0aFKzooZFkz9MmkLL7W0/img.png?width=1465&amp;amp;height=520&amp;amp;face=0_0_1465_520&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/293&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/293&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/dXV0Kp/hyLSrjrltV/bDcInPzlsJslLSUrkPiay1/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/btT0kF/hyLSz9BllL/FUcSmoEbNxJGEQYE95CKu0/img.jpg?width=259&amp;amp;height=195&amp;amp;face=0_0_259_195,https://scrap.kakaocdn.net/dn/hqwtu/hyLStuNuuu/bK0aFKzooZFkz9MmkLL7W0/img.png?width=1465&amp;amp;height=520&amp;amp;face=0_0_1465_520');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;3-1. DICOM 파일이란? (Feat. Definition, PACS, digital image 습득과정)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 의료 영상(Medical imaging)에서 사용하는 DICOM 파일이 어떻게 생겨났는지, 어떻게 이용되고 있는지 알아보도록 하겠습니다. 또한 DICOM이라는 것이 digital image이기 때문에,&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;imagematrix.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;290&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/vAYtg/btrg4Kabx8u/3KQ1QlJesKW4hX0cnWAvDk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/vAYtg/btrg4Kabx8u/3KQ1QlJesKW4hX0cnWAvDk/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://ai.stanford.edu/~syyeung/cvweb/tutorial1.html&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/vAYtg/btrg4Kabx8u/3KQ1QlJesKW4hX0cnWAvDk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FvAYtg%2Fbtrg4Kabx8u%2F3KQ1QlJesKW4hX0cnWAvDk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;704&quot; height=&quot;290&quot; data-filename=&quot;imagematrix.png&quot; data-origin-width=&quot;704&quot; data-origin-height=&quot;290&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://ai.stanford.edu/~syyeung/cvweb/tutorial1.html&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;하지만, dicom 파일을 구성하고 있는 pixel은 0~255의 범위만 갖고 있는 것이 아니라, &quot;-x ~ +x&quot; 와 같이 범위가 마이너스부터 시작 되는 경우도 있고, bit depth가 12bit, 16bit로 구성되어 있는 것도 많습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어, CT에는 hounsfield unit (HU)이라는 것이 있습니다. &lt;span&gt;X-ray 또는 CT 촬영 원리는&lt;span&gt; &lt;/span&gt;&lt;/span&gt;사람에게 방사선을 쏴서 방사능 투과율에 따라 detector 부분이 다르게 보이는 것입니다.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Projectional_radiography_components.jpg&quot; data-origin-width=&quot;1200&quot; data-origin-height=&quot;875&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HutHK/btrg8D10PuM/MKLrp5njKicrDCwkUhAka1/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HutHK/btrg8D10PuM/MKLrp5njKicrDCwkUhAka1/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HutHK/btrg8D10PuM/MKLrp5njKicrDCwkUhAka1/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHutHK%2Fbtrg8D10PuM%2FMKLrp5njKicrDCwkUhAka1%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;472&quot; height=&quot;344&quot; data-filename=&quot;Projectional_radiography_components.jpg&quot; data-origin-width=&quot;1200&quot; data-origin-height=&quot;875&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 투과율에 따라 detector에 입력되는 값들이 다를 텐데, hounsfield라는 사람은 HU라는 단위를 통해 물체를 구분했습니다. HU 단위는 물을 기준으로 하고 뼈는 400~1000 HU 사이의 값을 갖고, 공기는 -1000의 값을 갖는다고 정의했습니다. 값이 작을 수록 검은색을 띄며, 값이 클 수록 흰색을 띕니다 (gray scale과 비슷하죠)&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;The-Hounsfield-scale-of-CT-numbers.png&quot; data-origin-width=&quot;592&quot; data-origin-height=&quot;304&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/CrW9l/btrg3rBZuGH/vhXYUK22zekoj5aqjgj7GK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/CrW9l/btrg3rBZuGH/vhXYUK22zekoj5aqjgj7GK/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.researchgate.net/figure/The-Hounsfield-scale-of-CT-numbers_fig2_306033192&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/CrW9l/btrg3rBZuGH/vhXYUK22zekoj5aqjgj7GK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FCrW9l%2Fbtrg3rBZuGH%2FvhXYUK22zekoj5aqjgj7GK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;592&quot; height=&quot;304&quot; data-filename=&quot;The-Hounsfield-scale-of-CT-numbers.png&quot; data-origin-width=&quot;592&quot; data-origin-height=&quot;304&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.researchgate.net/figure/The-Hounsfield-scale-of-CT-numbers_fig2_306033192&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;Hounsfield-scale-table.png&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;537&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/1SdOa/btrg7GZsgsG/KjlJM5BFkjdQT6XqfTqFKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/1SdOa/btrg7GZsgsG/KjlJM5BFkjdQT6XqfTqFKk/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.researchgate.net/figure/Hounsfield-scale-table_tbl1_327863426&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/1SdOa/btrg7GZsgsG/KjlJM5BFkjdQT6XqfTqFKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F1SdOa%2Fbtrg7GZsgsG%2FKjlJM5BFkjdQT6XqfTqFKk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;455&quot; height=&quot;371&quot; data-filename=&quot;Hounsfield-scale-table.png&quot; data-origin-width=&quot;658&quot; data-origin-height=&quot;537&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.researchgate.net/figure/Hounsfield-scale-table_tbl1_327863426&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;다운로드.png&quot; data-origin-width=&quot;403&quot; data-origin-height=&quot;125&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bw5K1u/btrg7GZr9VQ/fb2lWnzjhGHaTNGXHNZVG1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bw5K1u/btrg7GZr9VQ/fb2lWnzjhGHaTNGXHNZVG1/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://processing.org/tutorials/color&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bw5K1u/btrg7GZr9VQ/fb2lWnzjhGHaTNGXHNZVG1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbw5K1u%2Fbtrg7GZr9VQ%2Ffb2lWnzjhGHaTNGXHNZVG1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;403&quot; height=&quot;125&quot; data-filename=&quot;다운로드.png&quot; data-origin-width=&quot;403&quot; data-origin-height=&quot;125&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://processing.org/tutorials/color&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;실제로 dicom 파일 하나를 가져와서&amp;nbsp; max, min 값을 뿌려보면 아래와 같이 나옵니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(Note. DICOM 파일에서 추출한 pixel 값들이 완전히 HU 값을 표현한다고 볼 순 없습니다. 그 이유에 대해서는 다음 글의 &quot;CT 전처리 관련 글&quot;에서 'slope intercept equation' 부분에서 설명하도록 하겠습니다)&lt;/span&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;931&quot; data-origin-height=&quot;326&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DAmQc/btrg8Zk2L6q/qkSKK6iXKT0cHHreHXzHUK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DAmQc/btrg8Zk2L6q/qkSKK6iXKT0cHHreHXzHUK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DAmQc/btrg8Zk2L6q/qkSKK6iXKT0cHHreHXzHUK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDAmQc%2Fbtrg8Zk2L6q%2FqkSKK6iXKT0cHHreHXzHUK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;931&quot; height=&quot;326&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;931&quot; data-origin-height=&quot;326&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. DICOM 파일 전처리&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 DICOM 파일의 bit depth (color depth)와 실제 이미지 bit depth는 다르다고 언급했습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;딥러닝 모델에 들어가는 값은 보통 bit depth를 기반으로 하기 때문에, 이에 맞게 바꾸어 주어야 합니다. 즉, 12bit depth였다면, 0~255 range를 갖는 8bit depth로 변경시켜주어야 하는 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;8bit depth로 변경해주는 방법은 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;12bit depth 값들을 모두 0~1 사이로 normalization 해주기&lt;/li&gt;
&lt;li&gt;0~1 사이로 normalization 해준 값들에 255를 곱해주어 이미지에 맞는 8bit depth로 변경해주기&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림1.png&quot; data-origin-width=&quot;1282&quot; data-origin-height=&quot;656&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/be8Jm5/btrg8En4pOt/yfOMYRryia5SVs1QFLNK9K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/be8Jm5/btrg8En4pOt/yfOMYRryia5SVs1QFLNK9K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/be8Jm5/btrg8En4pOt/yfOMYRryia5SVs1QFLNK9K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbe8Jm5%2Fbtrg8En4pOt%2FyfOMYRryia5SVs1QFLNK9K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1282&quot; height=&quot;656&quot; data-filename=&quot;그림1.png&quot; data-origin-width=&quot;1282&quot; data-origin-height=&quot;656&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;Normalization 공식&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;0_swXB8rwwga2eVY0W.png&quot; data-origin-width=&quot;341&quot; data-origin-height=&quot;225&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/0K52C/btrhdlArru4/wnhRs85OVlHQ3YjpSWi1Ok/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/0K52C/btrhdlArru4/wnhRs85OVlHQ3YjpSWi1Ok/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://python.plainenglish.io/how-to-de-normalize-and-de-standardize-data-in-python-b4600cf9ee6&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/0K52C/btrhdlArru4/wnhRs85OVlHQ3YjpSWi1Ok/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F0K52C%2FbtrhdlArru4%2FwnhRs85OVlHQ3YjpSWi1Ok%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;341&quot; height=&quot;225&quot; data-filename=&quot;0_swXB8rwwga2eVY0W.png&quot; data-origin-width=&quot;341&quot; data-origin-height=&quot;225&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://python.plainenglish.io/how-to-de-normalize-and-de-standardize-data-in-python-b4600cf9ee6&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이렇게 바꿔준 값들은 이미지로 저장시키거나 numpy로 저장시킵니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그리고, albumentation 같은 곳에서 load하기 쉽게 formatting을 해주죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; albumentation 관련 글&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/312?category=1001221&quot;&gt;https://89douner.tistory.com/312?category=1001221&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure id=&quot;og_1633680907534&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;1-2. Data Load (Feat. Albumentations)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 Albumentations라는 패키지를 이용하여 데이터를 로드하는 방법에 대해서 설명하도록 하겠습니다. https://github.com/albumentations-team/albumentations GitHub - albumentations-te..&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/312?category=1001221&quot; data-og-url=&quot;https://89douner.tistory.com/312&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/cB1GCA/hyLTvSPXNy/Ymq3pXShk9qGe8uUTUiWck/img.png?width=757&amp;amp;height=424&amp;amp;face=0_0_757_424,https://scrap.kakaocdn.net/dn/cYGgVv/hyLSqyQnof/lXJhYJ9pXbVYnVYE6KRoJ1/img.png?width=757&amp;amp;height=424&amp;amp;face=0_0_757_424,https://scrap.kakaocdn.net/dn/o0V0z/hyLTyvhYfT/oN03r1a4seWF8hwm0z7QA1/img.png?width=1027&amp;amp;height=703&amp;amp;face=0_0_1027_703&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/312?category=1001221&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/312?category=1001221&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/cB1GCA/hyLTvSPXNy/Ymq3pXShk9qGe8uUTUiWck/img.png?width=757&amp;amp;height=424&amp;amp;face=0_0_757_424,https://scrap.kakaocdn.net/dn/cYGgVv/hyLSqyQnof/lXJhYJ9pXbVYnVYE6KRoJ1/img.png?width=757&amp;amp;height=424&amp;amp;face=0_0_757_424,https://scrap.kakaocdn.net/dn/o0V0z/hyLTyvhYfT/oN03r1a4seWF8hwm0z7QA1/img.png?width=1027&amp;amp;height=703&amp;amp;face=0_0_1027_703');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;1-2. Data Load (Feat. Albumentations)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 Albumentations라는 패키지를 이용하여 데이터를 로드하는 방법에 대해서 설명하도록 하겠습니다. https://github.com/albumentations-team/albumentations GitHub - albumentations-te..&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 DICOM 파일 전처리 관련된 설명이었습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;감사합니다~&lt;/span&gt;&lt;/p&gt;</description>
      <category>Medical  AI research/Background</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/336</guid>
      <comments>https://89douner.tistory.com/336#entry336comment</comments>
      <pubDate>Thu, 7 Oct 2021 21:01:32 +0900</pubDate>
    </item>
    <item>
      <title>Contrastive learning이란? (Feat. Contrastive loss)</title>
      <link>https://89douner.tistory.com/334</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는&lt;b&gt; contrastive learning&lt;/b&gt;에 대해서 설명하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Contrast&lt;/b&gt;라는 &lt;b&gt;용어&lt;/b&gt;를 정의하면 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;A&amp;nbsp;contrast&amp;nbsp;is&amp;nbsp;a&amp;nbsp;great&amp;nbsp;difference&amp;nbsp;between&amp;nbsp;two&amp;nbsp;or&amp;nbsp;more&amp;nbsp;things&amp;nbsp;which&amp;nbsp;is&amp;nbsp;clear&amp;nbsp;when&amp;nbsp;you&amp;nbsp;&lt;span style=&quot;color: #000000;&quot;&gt;compare&amp;nbsp;&lt;/span&gt;them.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면, &lt;b&gt;contrastive learning&lt;/b&gt;이라는 것은 &lt;b&gt;대상들의 차이를 좀 더 명확하게 보여줄 수 있도록 학습&lt;/b&gt; 한다는 뜻이 되겠죠? &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;'대상들의 차이'&lt;/b&gt;라는 말에서 중점적으로 봐야 할 것은 &lt;b&gt;'차이'&lt;/b&gt;라는 용어입니다. 보통 어떤 &lt;b&gt;'기준'&lt;/b&gt;으로 인해 &lt;b&gt;'차이'&lt;/b&gt;가 발생합니다.&lt;b&gt;&amp;nbsp;&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;예를 들어, &lt;b&gt;어떤 이미지들이 서로 유사&lt;/b&gt;하다고 판단하게 하기 위해서는&lt;b&gt; 어떤 기준&lt;/b&gt;들이 적용되어야 할까요? 즉, 어떤 &lt;b&gt;'기준'&lt;/b&gt;을 적용하면 이미지들이 비슷한지, 비슷하지 않은지에 대한 '차이'를 만들어 낼 수 있을까요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;고양이&lt;/b&gt;라는 &lt;b&gt;이미지&lt;/b&gt;가 있다고 가정해보겠습니다. 고양이 이미지에 &lt;b&gt;augmentation&lt;/b&gt;을 주게 되도 그 이미지는 고양이 일 것입니다. 즉, &lt;b&gt;원본 고양&lt;/b&gt;이 이미지와 &lt;b&gt;augmentation이 적용된 고양이 이미지&lt;/b&gt;는 &lt;b&gt;서로 유사(=positive pair)&lt;/b&gt;하다고 할 수 있죠. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림22.png&quot; data-origin-width=&quot;1883&quot; data-origin-height=&quot;526&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/XQVuV/btrgoYFLH2m/nYxMxsmbkdiwp29lacefW0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/XQVuV/btrgoYFLH2m/nYxMxsmbkdiwp29lacefW0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/XQVuV/btrgoYFLH2m/nYxMxsmbkdiwp29lacefW0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FXQVuV%2FbtrgoYFLH2m%2FnYxMxsmbkdiwp29lacefW0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1883&quot; height=&quot;526&quot; data-filename=&quot;그림22.png&quot; data-origin-width=&quot;1883&quot; data-origin-height=&quot;526&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;누군가(제3자)가 augmented image에 굳이 'similar'라고 labeling 해줄 필요 없이 input data 자기 자신(self)에 의해 파생된 라벨링(&lt;span&gt;supervised &lt;span&gt;&amp;larr; &lt;/span&gt;&lt;/span&gt;ex: augmented image) 데이터로 학습(learning)하기 때문에 self-supervlsed learning이라고 할 수 있습니다.&amp;nbsp; &amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;하지만 'similar' 정의는 어떤 기준을 삼느냐에 따라 굉장히 달라 질 수 있습니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;물론, similar를 찾는 방법도 굉장히 다양하겠죠. 지금부터 이에 대해서 천천히 알아보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. Similarity learning&amp;nbsp;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 언급한 내용 중에 가장 핵심적이고 자주 등장하는 용어가 &lt;b&gt;'similar'&lt;/b&gt;입니다. 그렇다면, &lt;b&gt;contrastive learning과 similarity learning&lt;/b&gt;은 어떤&lt;b&gt; 관계&lt;/b&gt;가 있을까요? 먼저, &lt;b&gt;similarity learning&lt;/b&gt;의 &lt;b&gt;정의&lt;/b&gt;부터 살펴보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;Similarity&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;learning is closely related to regression and classification, but the goal is to learn a similarity function that measures&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;how similar or related two objects are&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국, &lt;b&gt;contrastive learning&lt;/b&gt;과 &lt;b&gt;similarity learning&lt;/b&gt; 모두 다 &lt;b&gt;어떤 객체들에 대한 유사도&lt;/b&gt;와 관련이 있다는걸 알 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림1.png&quot; data-origin-width=&quot;1239&quot; data-origin-height=&quot;686&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cV6GM4/btrgBIxgeyo/wtbabROgtqeqoDVBsZD8Ek/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cV6GM4/btrgBIxgeyo/wtbabROgtqeqoDVBsZD8Ek/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.youtube.com/watch?v=OkcS4qE4Zsg&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cV6GM4/btrgBIxgeyo/wtbabROgtqeqoDVBsZD8Ek/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcV6GM4%2FbtrgBIxgeyo%2FwtbabROgtqeqoDVBsZD8Ek%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1239&quot; height=&quot;686&quot; data-filename=&quot;그림1.png&quot; data-origin-width=&quot;1239&quot; data-origin-height=&quot;686&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.youtube.com/watch?v=OkcS4qE4Zsg&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;좀 더 similarity learning을 알아볼까요?&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;먼저, similarity learning의 3가지 종류들에 대해서 알아보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;darr;참고 자료&lt;span style=&quot;color: #000000;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;darr;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Similarity_learning&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://en.wikipedia.org/wiki/Similarity_learning&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1633241931359&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Similarity learning - Wikipedia&quot; data-og-description=&quot;Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has ap&quot; data-og-host=&quot;en.wikipedia.org&quot; data-og-source-url=&quot;https://en.wikipedia.org/wiki/Similarity_learning&quot; data-og-url=&quot;https://en.wikipedia.org/wiki/Similarity_learning&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Similarity_learning&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://en.wikipedia.org/wiki/Similarity_learning&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Similarity learning - Wikipedia&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has ap&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;en.wikipedia.org&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;1-1. Regression similarity learning&lt;/span&gt;&lt;/b&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림2.png&quot; data-origin-width=&quot;1638&quot; data-origin-height=&quot;169&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/DBOHG/btrgBhmfv2V/K3mdpFJXzkz4ew0p4kOpf0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/DBOHG/btrgBhmfv2V/K3mdpFJXzkz4ew0p4kOpf0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/DBOHG/btrgBhmfv2V/K3mdpFJXzkz4ew0p4kOpf0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FDBOHG%2FbtrgBhmfv2V%2FK3mdpFJXzkz4ew0p4kOpf0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1638&quot; height=&quot;169&quot; data-filename=&quot;그림2.png&quot; data-origin-width=&quot;1638&quot; data-origin-height=&quot;169&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;두 객체(ex: 이미지)간의 유사도를 알고 있다는 전제하에 &lt;b&gt;supervised learning 학습&lt;/b&gt;을 시키는 것&lt;/li&gt;
&lt;li&gt;유사도는 어떤 기준(pre-defined)에 의해 설정되는데&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이 기준에 의해 모델이 학습됨&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;위의 &lt;b&gt;y&lt;/b&gt;가 &lt;b&gt;유사도&lt;/b&gt;를 나타내 주는 &lt;b&gt;값&lt;/b&gt; &amp;rarr; 즉, 유사도가 높으면 y 값이 높게 설정 됨&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;앞서 설정한 유사도에 따라 모델이 학습되면, 학습된 모델에 &lt;span style=&quot;color: #000000;&quot;&gt;test &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;데이터인 두 객체&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(ex:&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이미지&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 입력 될 때&lt;/span&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;pre-defined&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 기준에 따라 유사도를 결정&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Ex)&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;강아지 이미지 데이터들 간에는 모두 강한 유사도를 주고&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;강아지 이미지 데이터와 고양이 이미지 데이터들의 유사도는 굉장히 낮은 값으로 설정해주어 학습 시키면&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;학습 한 모델은 강아지 이미지들끼리에 대해서 높은 유사도 값을&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;regression&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;할 것 입니다.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;
&lt;/ul&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;하지만&lt;span style=&quot;color: #000000;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이러한 유사도&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(y)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 미리 알고 어떻게 할당해줄 것인지는 굉장히 어려운 문제&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;1-2. Classification similarity learning&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림3.png&quot; data-origin-width=&quot;1337&quot; data-origin-height=&quot;259&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/NoJbO/btrgDdJS6sp/kUYKNCwXUEpcB1DoLy3H50/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/NoJbO/btrgDdJS6sp/kUYKNCwXUEpcB1DoLy3H50/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/NoJbO/btrgDdJS6sp/kUYKNCwXUEpcB1DoLy3H50/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FNoJbO%2FbtrgDdJS6sp%2FkUYKNCwXUEpcB1DoLy3H50%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1337&quot; height=&quot;259&quot; data-filename=&quot;그림3.png&quot; data-origin-width=&quot;1337&quot; data-origin-height=&quot;259&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;앞선 &quot;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Regression similarity learning&quot; 방식과&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;다른 점&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;]&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Regression: y&amp;isin;R
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;유사도(=y) 값의 범위는 실수 (ex: 0~1) &amp;rarr; 유사도의 정도를 파악할 수 있음&lt;/li&gt;
&lt;li&gt;이때, R값의 범위를 어떻게 설정하고, 어떤 y값을 해줘야 하는지 어려움 (soften label 개념정도로 봐도 될 듯)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Classification: y&lt;span style=&quot;color: #000000;&quot;&gt; &amp;isin;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;{0,1} &lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;두 객체가 유사한지 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;아닌지만&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 알려주기 때문에 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;입력으로 들어오는 두 객체&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(ex:&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이미지&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 어느 정도로 유사한지는 알 수 없음&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Demilight', 'Noto Sans KR';&quot;&gt;1-3. &lt;span style=&quot;color: #000000;&quot;&gt;Ranking similarity learning&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림6.png&quot; data-origin-width=&quot;1805&quot; data-origin-height=&quot;281&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cnyH2R/btrgBJpuQ0O/TO09I8U1aEmi2B8WOCqXDk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cnyH2R/btrgBJpuQ0O/TO09I8U1aEmi2B8WOCqXDk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cnyH2R/btrgBJpuQ0O/TO09I8U1aEmi2B8WOCqXDk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcnyH2R%2FbtrgBJpuQ0O%2FTO09I8U1aEmi2B8WOCqXDk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1805&quot; height=&quot;281&quot; data-filename=&quot;그림6.png&quot; data-origin-width=&quot;1805&quot; data-origin-height=&quot;281&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;[&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;앞선 두 가지 방식(&quot;regression or classification similarity learning&quot;)과 다른 점&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;]&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;앞선 두 가지 방식과 다른 부분은 &quot;세 가지 입력 데이터 (triplets of objects)&quot;를 필요로 한다는 점입니다.&lt;/li&gt;
&lt;li&gt;일반적인 데이터 &quot;x&quot;와 &quot;x와 유사한 x+&quot;, &quot;x와 유사하지 않은 x-&quot; 데이터가 입력으로 들어갑니다.&lt;/li&gt;
&lt;li&gt;이런식으로 유사한 데이터들 간의 유사도와 유사하지 않은 데이터들 간의 유사도 차이를 위와 같이 설정하여 학습하게 됩니다.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국 contrastive learning과 similarity learning 모두 데이터들 간의 similarity를 알아내는 것이 목적입니다.&lt;/span&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;Contrastive learning is an approach to formulate the task of finding similar and dissimilar things for an ML model.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#nce&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;&lt;i&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;The main idea of contrastive learning is to learn representations such that similar samples stay close to each other, while dissimilar ones are far apart.&quot;&lt;/span&gt;&lt;/i&gt;&lt;/a&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림7.png&quot; data-origin-width=&quot;983&quot; data-origin-height=&quot;610&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bVKJID/btrgCHxVjT5/XFPMNY6SpfUkiu61Ca3Lk1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bVKJID/btrgCHxVjT5/XFPMNY6SpfUkiu61Ca3Lk1/img.png&quot; data-alt=&quot;&amp;amp;lt;이미지 출처 논문: Deep Metric Learning via Lifted Structured Feature Embedding&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bVKJID/btrgCHxVjT5/XFPMNY6SpfUkiu61Ca3Lk1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbVKJID%2FbtrgCHxVjT5%2FXFPMNY6SpfUkiu61Ca3Lk1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;703&quot; height=&quot;436&quot; data-filename=&quot;그림7.png&quot; data-origin-width=&quot;983&quot; data-origin-height=&quot;610&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;이미지 출처 논문: Deep Metric Learning via Lifted Structured Feature Embedding&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #202122;&quot;&gt;&quot;Similarity learning is closely related to&amp;nbsp;&lt;/span&gt;distance metric learning&lt;span style=&quot;color: #202122;&quot;&gt;.&quot;&amp;nbsp;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;데이터들끼리 &lt;b&gt;유사도&lt;/b&gt;가 높다는 것을 &lt;b&gt;거리(distance)&lt;/b&gt;의 &lt;b&gt;관점&lt;/b&gt;에서 &lt;b&gt;해석&lt;/b&gt;해볼 수 도 있습니다. 예를 들어, &lt;b&gt;유사한 데이터 끼리는 거리가 가깝다&lt;/b&gt;는 식으로 해석해 볼 수 있는 것이죠. 그래서, &lt;b&gt;similarity learning, contrastive learning&lt;/b&gt;을 배우다 보면&lt;b&gt; distance metric learning&lt;/b&gt;이라는 용어가 자주 등장합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;2. (Distance) Metric learning&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;유사도&lt;/b&gt;를 &lt;b&gt;판단&lt;/b&gt;하는데 있어서는 굉장히 &lt;b&gt;다양한 기준&lt;/b&gt;이 적용될 수 있습니다.&amp;nbsp; &lt;b&gt;유사도&lt;/b&gt;를&lt;b&gt; 판단&lt;/b&gt;하는&amp;nbsp;한 가지 방법은&lt;b&gt;&amp;nbsp;거리&lt;/b&gt;의 &lt;b&gt;관점&lt;/b&gt;에서 해석하는 것입니다.&amp;nbsp; &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 거리라는 개념을 단순히 점과 점 사이의 최단 거리로만 이해하는 경우가 있지만, 거리를 측정하는 방식에는 다양한 방법이 존재 합니다. 즉, &quot;거리&quot;라는 개념을 어떻게 해석하느냐가 굉장히 중요한 문제라고 볼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국, &lt;b&gt;두 객체간의 거리를 측정할 수 있는 방법이 다양&lt;/b&gt;하기 때문에 &lt;b&gt;두 객체간의 유사도를 적용할 수 있는 기준이 다양하다&lt;/b&gt;고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #202122;&quot;&gt;&quot;Similarity learning is closely related to&amp;nbsp;&lt;/span&gt;distance metric learning&lt;span style=&quot;color: #202122;&quot;&gt;. Metric learning is the task of learning a distance function over objects.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 정의에 따라&amp;nbsp;&lt;b&gt;metric learning&lt;/b&gt;은 &lt;b&gt;객체간의 거리를 학습&lt;/b&gt;하는 방법들에 대해 &lt;b&gt;연구&lt;/b&gt;하는 &lt;b&gt;분야&lt;/b&gt;라고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면, &lt;b&gt;metric&lt;/b&gt;이라는 &lt;b&gt;개념&lt;/b&gt;부터 &lt;b&gt;정의&lt;/b&gt;해볼까요?&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;&quot;A Metric is a function that quantifies a &amp;ldquo;distance&amp;rdquo; between every pair of elements in a set, thus inducing a measure of similarity.&quot;&lt;/b&gt;&lt;/span&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;객체(데이터)들 간의 거리(or 유사도)를 수량화 하는 방법&lt;/b&gt;은 &lt;b&gt;여러가지&lt;/b&gt;가 있습니다. (결국, 우리가 배우는 contrastive learning 기법에서도 아래와 같은 metric들 중에 어느 것을 사용하느냐에 따라 다양한 연구가 진행 될 수 있겠죠.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림8.png&quot; data-origin-width=&quot;416&quot; data-origin-height=&quot;440&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/drxprc/btrgB5eL67t/R5Zl9KYxYMyPOzkGZNXjZK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/drxprc/btrgB5eL67t/R5Zl9KYxYMyPOzkGZNXjZK/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처:&amp;amp;nbsp; https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/drxprc/btrgB5eL67t/R5Zl9KYxYMyPOzkGZNXjZK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fdrxprc%2FbtrgB5eL67t%2FR5Zl9KYxYMyPOzkGZNXjZK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;483&quot; height=&quot;511&quot; data-filename=&quot;그림8.png&quot; data-origin-width=&quot;416&quot; data-origin-height=&quot;440&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Metric learning에서 중요한 개념은 '&lt;b&gt;거리&lt;/b&gt;'입니다. 그렇기 때문에, &lt;b&gt;metric learning&lt;/b&gt;에서 정의한 &lt;b&gt;4가지 속성&lt;/b&gt;도 '&lt;b&gt;거리'의 속성&lt;/b&gt;을 그대로 &lt;b&gt;반영&lt;/b&gt;하고 있죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;&lt;span style=&quot;color: #000000;&quot;&gt;A metric or distance function must obey four axioms&lt;/span&gt;&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Non-negativity: f(x,y)&amp;ge;0
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;x, y 두 데이터 간의 거리는 음수가 될 수 없다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Identity of Discernible: f(x,y)=0 &amp;lt;=&amp;gt; x=y
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;x, y 두 데이터 간의 거리가 0이라면, x와 y 데이터는 동일하다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Symmetry: f(x,y) = f(y,x)
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&quot;x,y&quot; 간의 거리나, &quot;y,x&quot; 간의 거리는 같다&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Triangle Inequality: f(x,z)&amp;le;f(x,y)+f(y,z)
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&quot;x,z&quot;간의 거리는 &quot;x,y&quot; 간의 거리와 &quot;y,z&quot;간의 거리를 합한 것보다 클 수 없다.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림13.png&quot; data-origin-width=&quot;559&quot; data-origin-height=&quot;168&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lNF6B/btrgMsMOwkf/KJrONebbOSP7xSkfoYcrW0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lNF6B/btrgMsMOwkf/KJrONebbOSP7xSkfoYcrW0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lNF6B/btrgMsMOwkf/KJrONebbOSP7xSkfoYcrW0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlNF6B%2FbtrgMsMOwkf%2FKJrONebbOSP7xSkfoYcrW0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;559&quot; height=&quot;168&quot; data-filename=&quot;그림13.png&quot; data-origin-width=&quot;559&quot; data-origin-height=&quot;168&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2-1. Metric&amp;nbsp; of Two types&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;거리를 측정하는 &lt;b&gt;metric&lt;/b&gt; 방식에도 크게 &lt;b&gt;두 가지 종류&lt;/b&gt;가 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;Pre-defined Metrics
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;단순히 데이터들을 정의 된 metric 공식에 입력하고 '거리' 값을 도출하여 유사도 비교&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Learned metrics
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;데이터들로 부터 추정할 수 있는 다른 지표들을 metric 공식에 적용하여 '거리' 값을 도출&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1071&quot; data-origin-height=&quot;767&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bUJxTD/btrgBggxItL/9I43HKRHGXygD6dGaoA5BK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bUJxTD/btrgBggxItL/9I43HKRHGXygD6dGaoA5BK/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bUJxTD/btrgBggxItL/9I43HKRHGXygD6dGaoA5BK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbUJxTD%2FbtrgBggxItL%2F9I43HKRHGXygD6dGaoA5BK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;572&quot; height=&quot;410&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;1071&quot; data-origin-height=&quot;767&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;'Learned metrics'&lt;/b&gt; 방식의&lt;b&gt; 대표주자&lt;/b&gt;라고 할 수 있는 것은 딥러닝을 이용한&lt;b&gt; deep metric learning&lt;/b&gt; 방식입니다. 그럼 지금부터 deep metric learning에 대해 좀 더 알아보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2-2. Deep Metric Learning (Feat. Contrastive loss)&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;객체(데이터)&lt;/b&gt;들이 만약 &lt;b&gt;고차원&lt;/b&gt;이라면 서로 간의&lt;b&gt; 유사도&lt;/b&gt;를 &lt;b&gt;비교&lt;/b&gt;하는건 굉장히 &lt;b&gt;어려운 문제&lt;/b&gt;가 될 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;예를 들어, &lt;b&gt;의미적으로 가깝다고 생각되는 고차원 공간에서의 두 샘플&lt;/b&gt;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(A,B)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;간의 실제 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Euclidean distance&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;는 먼 경우가 많습&lt;/b&gt;니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;그 이유는 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;curse of dimension&amp;rdquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;으로 인해 의미 있는 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;manifold&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;를 찾지 못했기 때문&lt;/b&gt;이죠.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;[추후에 Auto-Encoder 관련 글이 완성되면 manifold에 대한 보충 설명 글로써 링크를 걸어두도록 하겠습니다]&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림10.png&quot; data-origin-width=&quot;991&quot; data-origin-height=&quot;850&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/Bx8jQ/btrgMsMM8qD/g1UqpSVpTFYkDROTcYkvL0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/Bx8jQ/btrgMsMM8qD/g1UqpSVpTFYkDROTcYkvL0/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처:&amp;amp;nbsp; https://slidetodoc.com/image-manifolds-a-a-efros-16-721-learningbased/&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/Bx8jQ/btrgMsMM8qD/g1UqpSVpTFYkDROTcYkvL0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FBx8jQ%2FbtrgMsMM8qD%2Fg1UqpSVpTFYkDROTcYkvL0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;686&quot; height=&quot;589&quot; data-filename=&quot;그림10.png&quot; data-origin-width=&quot;991&quot; data-origin-height=&quot;850&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://slidetodoc.com/image-manifolds-a-a-efros-16-721-learningbased/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;즉&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;실제&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Euclidean distance&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;는&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;manifold&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;상에서 구해야 하기 때문에, &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;manifold&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 잘 찾는 것이 두 데이터간 유의미한 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;similarity&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;를 구하는데 결정적인 역할&lt;/b&gt;을 할 수 있겠죠. &lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림11.png&quot; data-origin-width=&quot;613&quot; data-origin-height=&quot;790&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bG6Asm/btrgFSZOyNA/1YWe0fRWNpyOepFV8eorb1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bG6Asm/btrgFSZOyNA/1YWe0fRWNpyOepFV8eorb1/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처:&amp;amp;nbsp; https://slidetodoc.com/image-manifolds-a-a-efros-16-721-learningbased/&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bG6Asm/btrgFSZOyNA/1YWe0fRWNpyOepFV8eorb1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbG6Asm%2FbtrgFSZOyNA%2F1YWe0fRWNpyOepFV8eorb1%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;530&quot; height=&quot;683&quot; data-filename=&quot;그림11.png&quot; data-origin-width=&quot;613&quot; data-origin-height=&quot;790&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://slidetodoc.com/image-manifolds-a-a-efros-16-721-learningbased/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;결국, &lt;b&gt;유의미한 manifold를 찾기 위해서는 dimension reduction 방식이 필요&lt;/b&gt;한데, 그것이 오늘날 자주 사용되는 &lt;b&gt;deep neural network&lt;/b&gt;이죠.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림15.png&quot; data-origin-width=&quot;995&quot; data-origin-height=&quot;527&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c44ltg/btrgB41tTTx/vtiN5r25nLOKVXkUv5t800/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c44ltg/btrgB41tTTx/vtiN5r25nLOKVXkUv5t800/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: http://daddynkidsmakers.blogspot.com/2021/05/&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c44ltg/btrgB41tTTx/vtiN5r25nLOKVXkUv5t800/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc44ltg%2FbtrgB41tTTx%2FvtiN5r25nLOKVXkUv5t800%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;624&quot; height=&quot;331&quot; data-filename=&quot;그림15.png&quot; data-origin-width=&quot;995&quot; data-origin-height=&quot;527&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: http://daddynkidsmakers.blogspot.com/2021/05/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;결국, 특정 metric(ex: Euclidean distance)을 기준으로 한 유사도를 찾기 위해 deep learning model의 parameter들이 학습된다면, 이는 해당 meteric을 찾기 위한 &lt;/span&gt;&lt;b&gt;manifold를 찾는 과정이라고 볼 수 있고, 이 과정 자체가 &quot;estimated from the data&quot;를 의미&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;하기 때문에,&amp;nbsp;&lt;/span&gt;&lt;b&gt;learned metrics&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;라고 볼 수 있는 것입니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림9.png&quot; data-origin-width=&quot;1842&quot; data-origin-height=&quot;628&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b4RKYL/btrgBJwcWdP/4XcTRnKCK8Nn3WftkOEhCk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b4RKYL/btrgBJwcWdP/4XcTRnKCK8Nn3WftkOEhCk/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처:&amp;amp;nbsp; https://tyami.github.io/deep%20learning/Siamese-neural-networks/&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b4RKYL/btrgBJwcWdP/4XcTRnKCK8Nn3WftkOEhCk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb4RKYL%2FbtrgBJwcWdP%2F4XcTRnKCK8Nn3WftkOEhCk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1842&quot; height=&quot;628&quot; data-filename=&quot;그림9.png&quot; data-origin-width=&quot;1842&quot; data-origin-height=&quot;628&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://tyami.github.io/deep%20learning/Siamese-neural-networks/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;다시 말해, deep metric learning이란 deep neural network를 이용하여 적절한 manifold를 찾아 metric learning을 연구하는 분야라고 정리할 수 있습니다.&lt;/span&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;UIUC 대학의 SVETLANA LAZEBNIK 교수&lt;/b&gt;의 &lt;b&gt;similarity learning ppt자료&lt;/b&gt;를 살펴보면 &lt;b&gt;deep neural network를 이용하여 metric learening을 하는 것을 확인&lt;/b&gt;해 볼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;1805&quot; data-origin-height=&quot;685&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wyH5C/btrgCFAaGDK/mSWBwvz2Qd9Y3ZkikbFR71/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wyH5C/btrgCFAaGDK/mSWBwvz2Qd9Y3ZkikbFR71/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처:&amp;amp;nbsp;&amp;amp;nbsp;https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wyH5C/btrgCFAaGDK/mSWBwvz2Qd9Y3ZkikbFR71/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwyH5C%2FbtrgCFAaGDK%2FmSWBwvz2Qd9Y3ZkikbFR71%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1805&quot; height=&quot;685&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;1805&quot; data-origin-height=&quot;685&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp;&amp;nbsp;https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;[deep metric learning의 예시]&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, 유사한 이미지를 한 쌍으로 한&amp;nbsp;&lt;b&gt;positive pair 끼리는 Euclidian Loss가 최소화&lt;/b&gt;가 되도록&lt;b&gt; 학습&lt;/b&gt; 시키면, &lt;b&gt;deep neural network는 고차원 원본 데이터 positive pair끼리 거리가 가깝도록 low dimension으로 dimension reduction(or embedding)&lt;/b&gt; 할 것입니다. 즉, positive pair끼리는 Euclidian loss가 최소화 되게 parameter들이 학습된 것인데, 이것을 원본 데이터로 부터 추정(estimation)되었다고 볼 수 있기 때문에 learned metric이라고 한 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림14.png&quot; data-origin-width=&quot;1031&quot; data-origin-height=&quot;451&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LPPnN/btrgGs7M5DZ/3bkLCyqMtSJHIArGB8QkXk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LPPnN/btrgGs7M5DZ/3bkLCyqMtSJHIArGB8QkXk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LPPnN/btrgGs7M5DZ/3bkLCyqMtSJHIArGB8QkXk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLPPnN%2FbtrgGs7M5DZ%2F3bkLCyqMtSJHIArGB8QkXk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;528&quot; height=&quot;231&quot; data-filename=&quot;그림14.png&quot; data-origin-width=&quot;1031&quot; data-origin-height=&quot;451&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;또한, &lt;b&gt;negative pair 끼리는 Euclidan distance 값이 커지도록 설정&lt;/b&gt;해줄 수 있습니다. 아래 수식을 보면, margin (m) 이라는 개념이 도입되는데, &lt;b&gt;margin은 negative pair간의 최소한의 거리&lt;/b&gt;를 의미합니다. 예를 들어, 우리는 loss 값이 최소가 되기를 바라는데, negative pair (xn, xq) 의 거리가 m 보다 작다면 계속해서 loss 값을 생성해낼 것 입니다.&amp;nbsp; 그런데, 만약 학습을 통해 negative pair 간의 거리가 m 보다 크게 되면 loss 값을 0으로 수렴시킬 수 있게되죠.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림17.png&quot; data-origin-width=&quot;982&quot; data-origin-height=&quot;545&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bHxRkX/btrgAVKDB1b/E3t4TEgokumRLHXkc0Dm80/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bHxRkX/btrgAVKDB1b/E3t4TEgokumRLHXkc0Dm80/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bHxRkX/btrgAVKDB1b/E3t4TEgokumRLHXkc0Dm80/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbHxRkX%2FbtrgAVKDB1b%2FE3t4TEgokumRLHXkc0Dm80%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;561&quot; height=&quot;311&quot; data-filename=&quot;그림17.png&quot; data-origin-width=&quot;982&quot; data-origin-height=&quot;545&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위에서 언급한 두 수식을 결합한 loss를 contrastive loss라고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림23.png&quot; data-origin-width=&quot;1304&quot; data-origin-height=&quot;424&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/blc1Qx/btrgBiew68p/kXBFY6n36ArZJ0q8oo2c2K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/blc1Qx/btrgBiew68p/kXBFY6n36ArZJ0q8oo2c2K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/blc1Qx/btrgBiew68p/kXBFY6n36ArZJ0q8oo2c2K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fblc1Qx%2FbtrgBiew68p%2FkXBFY6n36ArZJ0q8oo2c2K%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;663&quot; height=&quot;216&quot; data-filename=&quot;그림23.png&quot; data-origin-width=&quot;1304&quot; data-origin-height=&quot;424&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;쉽게, &lt;b&gt;contrastive loss를 통해 학습을 한다는 것은 두 데이터가 negative pair일 때, margin 이상의 거리를 갖게 하도록 학습하는 것과 동일&lt;/b&gt;하다고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;432&quot; data-origin-height=&quot;474&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/nwlgE/btrgCGF3jTk/5mLxdhRlSBQOtGBqWUs5x0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/nwlgE/btrgCGF3jTk/5mLxdhRlSBQOtGBqWUs5x0/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://pizpaz.github.io/paper/ml/Ranked-List-Loss-for-Deep-Metric-Learning/&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/nwlgE/btrgCGF3jTk/5mLxdhRlSBQOtGBqWUs5x0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FnwlgE%2FbtrgCGF3jTk%2F5mLxdhRlSBQOtGBqWUs5x0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;381&quot; height=&quot;418&quot; data-filename=&quot;blob&quot; data-origin-width=&quot;432&quot; data-origin-height=&quot;474&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://pizpaz.github.io/paper/ml/Ranked-List-Loss-for-Deep-Metric-Learning/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Pytorch&lt;/b&gt;에서는 &lt;b&gt;negative pair&lt;/b&gt; 뿐만 아니라, &lt;b&gt;positive pair&lt;/b&gt;에 대한 &lt;b&gt;margin&lt;/b&gt; 값도 설정해놓고 있습니다. (참고로 아래 LpDistance는 Euclidean distance를 의미합니다)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림22.png&quot; data-origin-width=&quot;1563&quot; data-origin-height=&quot;538&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/FD3VK/btrgHDO6iXu/RcE6i3UqU86b2hwBy4yWkk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/FD3VK/btrgHDO6iXu/RcE6i3UqU86b2hwBy4yWkk/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.semanticscholar.org/paper/The-General-Pair-based-Weighting-Loss-for-Deep-Liu-Cheng/6c55bcc205b24c7aaf39680d71716e598a3cc536&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/FD3VK/btrgHDO6iXu/RcE6i3UqU86b2hwBy4yWkk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FFD3VK%2FbtrgHDO6iXu%2FRcE6i3UqU86b2hwBy4yWkk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;1563&quot; height=&quot;538&quot; data-filename=&quot;그림22.png&quot; data-origin-width=&quot;1563&quot; data-origin-height=&quot;538&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.semanticscholar.org/paper/The-General-Pair-based-Weighting-Loss-for-Deep-Liu-Cheng/6c55bcc205b24c7aaf39680d71716e598a3cc536&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Pytorch metric learning loss &amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1633272658727&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Losses - PyTorch Metric Learning&quot; data-og-description=&quot;Losses All loss functions are used as follows: from pytorch_metric_learning import losses loss_func = losses.SomeLoss() loss = loss_func(embeddings, labels) # in your training for-loop Or if you are using a loss in conjunction with a miner: from pytorch_me&quot; data-og-host=&quot;kevinmusgrave.github.io&quot; data-og-source-url=&quot;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&quot; data-og-url=&quot;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#contrastiveloss&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Losses - PyTorch Metric Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Losses All loss functions are used as follows: from pytorch_metric_learning import losses loss_func = losses.SomeLoss() loss = loss_func(embeddings, labels) # in your training for-loop Or if you are using a loss in conjunction with a miner: from pytorch_me&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;kevinmusgrave.github.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;참고로, &lt;b&gt;margin&lt;/b&gt;이라는 &lt;b&gt;개념&lt;/b&gt;을 &lt;b&gt;이용&lt;/b&gt;하게 되면 경우에 따라 &lt;b&gt;negative pairs의 관계&lt;/b&gt;를 크게 &lt;b&gt;3가지로 나눌&lt;/b&gt; 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;d: distance를 구하는 함수&lt;/li&gt;
&lt;li&gt;a: positive, negative pair의 기준이 되는 데이터&lt;/li&gt;
&lt;li&gt;Hard Negative Mining: positive pair에 해당하는 margin 안에 negative sample이 포함되어 있는 경우&lt;/li&gt;
&lt;li&gt;Semi-Hard Negative Mining: positive pair margin 범위 안에 속하진 않지만, negative pair margin 범위 안에도 속하지 않는 경우&lt;/li&gt;
&lt;li&gt;Easy Negative Mining: negative pair margin 범위에 속하는 경우&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;symmetry-11-01066-g004.png&quot; data-origin-width=&quot;3241&quot; data-origin-height=&quot;1630&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qZPPI/btrgBKCcolq/u0mi4sogxEas5GPSC9J3Jk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qZPPI/btrgBKCcolq/u0mi4sogxEas5GPSC9J3Jk/img.png&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.mdpi.com/2073-8994/11/9/1066/htm&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qZPPI/btrgBKCcolq/u0mi4sogxEas5GPSC9J3Jk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FqZPPI%2FbtrgBKCcolq%2Fu0mi4sogxEas5GPSC9J3Jk%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;3241&quot; height=&quot;1630&quot; data-filename=&quot;symmetry-11-01066-g004.png&quot; data-origin-width=&quot;3241&quot; data-origin-height=&quot;1630&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.mdpi.com/2073-8994/11/9/1066/htm&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;결국, contrastive loss를 이용하여 deep metric learning을 하게 되면 아래 그림 같이, 유사한 데이터들끼리 clustering이 될 것 입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;images.jfif&quot; data-origin-width=&quot;333&quot; data-origin-height=&quot;151&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/eGTDgW/btrgOs0jlLf/IZNg4HnGMZ4cJsPkLD3Xu0/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/eGTDgW/btrgOs0jlLf/IZNg4HnGMZ4cJsPkLD3Xu0/img.jpg&quot; data-alt=&quot;&amp;amp;lt;그림 출처: https://www.sciencedirect.com/science/article/abs/pii/S0925231219306800&amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/eGTDgW/btrgOs0jlLf/IZNg4HnGMZ4cJsPkLD3Xu0/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FeGTDgW%2FbtrgOs0jlLf%2FIZNg4HnGMZ4cJsPkLD3Xu0%2Fimg.jpg&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;461&quot; height=&quot;209&quot; data-filename=&quot;images.jfif&quot; data-origin-width=&quot;333&quot; data-origin-height=&quot;151&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://www.sciencedirect.com/science/article/abs/pii/S0925231219306800&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;사실, &lt;b&gt;contrastive loss&lt;/b&gt;라는 용어와 개념은 &lt;b&gt;&quot;Dimensionality Reduction by Learning an Invariant Mapping&quot;&lt;/b&gt;이라는 &lt;b&gt;논문에서 기원&lt;/b&gt;했습니다. 아래 논문의 저자로 우리가 익히 알고 있는 &lt;b&gt;'Yann LeCun'&lt;/b&gt; 교수님도 있으시네요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;Contrastive loss (Chopra et al. 2005) is one of the earliest training objectives used for &lt;/span&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;deep metric learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; in a contrastive fashion.&amp;rdquo;&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림24.png&quot; data-origin-width=&quot;735&quot; data-origin-height=&quot;539&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bkvMVu/btrgDUYaqwM/giM0pDA5cyKj8dRAgxlnb0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bkvMVu/btrgDUYaqwM/giM0pDA5cyKj8dRAgxlnb0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bkvMVu/btrgDUYaqwM/giM0pDA5cyKj8dRAgxlnb0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbkvMVu%2FbtrgDUYaqwM%2FgiM0pDA5cyKj8dRAgxlnb0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;577&quot; height=&quot;423&quot; data-filename=&quot;그림24.png&quot; data-origin-width=&quot;735&quot; data-origin-height=&quot;539&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서, 소개한&lt;b&gt; contrastive loss는 contrastive learning의 한 종류&lt;/b&gt;입니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉, contrastive learning 이라는 것은 데이터들 간의 특정한 기준에 의해 유사도를 측정하는 방식인데, &lt;b&gt;contrastive loss는&amp;nbsp;positive pair와 negative pair 간의 유사도를 &lt;span style=&quot;color: #ee2323;&quot;&gt;Euclidean distance 또는 cosine similairty를 이용&lt;/span&gt;해 측&lt;/b&gt;정하여, positive pair 끼리는 가깝게, negative pair 끼리는 멀게 하도록 하는 deep metric learning (or learned metric) 이라고 정리할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(참고로, contrastive learning을 굳이 deep neural network로 하지 않아도 되지만, deep neural network의 강력한 효용성 때문에 deep neural network를 기반으로한 deep metric learning 방식인 contrastive learning을 하려고 하는 것이 죠.) &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span&gt;Positive pair 끼리는 가깝게,&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span&gt;negative pair 끼리는 멀게 하도록 하는 deep metric learning (or learned metric)&lt;span&gt;&amp;nbsp;기반의&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;b&gt;contrastive learning&lt;/b&gt;&amp;nbsp;&lt;b&gt;종류&lt;/b&gt;는 굉장히 &lt;b&gt;다양&lt;/b&gt;합니다.&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;즉, &lt;b&gt;유사도&lt;/b&gt;를 &lt;b&gt;측정&lt;/b&gt;하는 &lt;b&gt;방식&lt;/b&gt;이&lt;b&gt; 다양&lt;/b&gt;하죠. 예를 들어, &lt;b&gt;infoNCE&lt;/b&gt;는 &lt;b&gt;mutual information&lt;/b&gt;이라는 &lt;b&gt;개념&lt;/b&gt;을 &lt;b&gt;기반&lt;/b&gt;으로 &lt;b&gt;유사도&lt;/b&gt;를 측정합니다. (Triplet loss는 이미 similarity learning에서 간단히 설명한 바 있습니다).&amp;nbsp; &lt;span&gt;(Mutual information 관련 설명은 다음 글에서 하도록 하겠습니다).&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;416&quot; data-origin-height=&quot;337&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bcsarN/btrgBKa8faj/fIjuosGNPxKT6PpEGjCISK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bcsarN/btrgBKa8faj/fIjuosGNPxKT6PpEGjCISK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bcsarN/btrgBKa8faj/fIjuosGNPxKT6PpEGjCISK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbcsarN%2FbtrgBKa8faj%2FfIjuosGNPxKT6PpEGjCISK%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;320&quot; height=&quot;259&quot; data-filename=&quot;제목 없음.png&quot; data-origin-width=&quot;416&quot; data-origin-height=&quot;337&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지의 설명을 기반으로 봤을 때&lt;b&gt; deep metric learning 기반의 contrastive learning이라는 분야&lt;/b&gt;를 다룰 때 &lt;b&gt;중요&lt;/b&gt;하게 다루어야 하는 &lt;b&gt;개념&lt;/b&gt;이&lt;b&gt; 두 가지&lt;/b&gt;가 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Similarity Measure (Metric)&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Contrastive learning은 positive pair 끼리는 가깝게,&amp;nbsp;negative pair 끼리는 멀게 하도록 해주는 것이 목적입니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;이 때, positive pair라는 것을 상징하는 유사도 값의 종류는 굉장히 다양합니다.&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;ex1) Euclidean distance &amp;rarr; Contrastive loss&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;ex2) Mutual information &amp;rarr; infoNCE&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Dimension Reduction (deep neural network; Nonlinear dimension reduction)&lt;/span&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Contrastive learning 즉, 데이터들 간의 유사도를 비교하는데 있어서 굳이 deep neural network를 사용할 필요는 없습니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;하지만, 고차원 데이터들 간의 유사도를 비교하는 것은 쉬운일이 아닙니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;그래서, 유사도 기준에 알맞도록 고차원 데이터를 저차원으로 dimension reduction 하는 방법이 중요합니다.&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;Deep neural network는 이미지와 같은 고차원 데이터를 저차원 feature로 embedding 할 수 있는 강력한 dimension reduction 기능을 갖고 있습니다.&amp;nbsp;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 딥러닝 외 nonlinear dimensionality reduction 방식 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-ke-mobileStyle=&quot;widthOrigin&quot; data-filename=&quot;그림25.png&quot; data-origin-width=&quot;894&quot; data-origin-height=&quot;577&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/XYrpg/btrgGsAc0k0/cBFUgDso8A59KWvPqhxNX0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/XYrpg/btrgGsAc0k0/cBFUgDso8A59KWvPqhxNX0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/XYrpg/btrgGsAc0k0/cBFUgDso8A59KWvPqhxNX0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FXYrpg%2FbtrgGsAc0k0%2FcBFUgDso8A59KWvPqhxNX0%2Fimg.png&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot; loading=&quot;lazy&quot; width=&quot;566&quot; height=&quot;365&quot; data-filename=&quot;그림25.png&quot; data-origin-width=&quot;894&quot; data-origin-height=&quot;577&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;개인적으로는, &lt;b&gt;contrastive learning&lt;/b&gt;을 &lt;b&gt;연구&lt;/b&gt;하기 위해서는&lt;b&gt; 두 가지 key word&lt;/b&gt; 인, &lt;b&gt;&quot;Metric learning&quot;&lt;/b&gt;, &lt;b&gt;&quot;Deep Neural Network&quot;&lt;/b&gt;에 초점을 맞추는게 중요하다고 생각합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 Contrasitve learning에 대해 간단히 정리해봤습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;이번 contrastive learning&lt;/b&gt;에서는 &lt;b&gt;유사도&lt;/b&gt;를&lt;b&gt; Euclidean distance&lt;/b&gt;를 기준으로 한 &lt;b&gt;contrasitve loss&lt;/b&gt;를 소개했습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;다음 글&lt;/b&gt;에서는 &lt;b&gt;contrastive learning&lt;/b&gt; 중에 &lt;b&gt;mutual information&lt;/b&gt;을 유사도의 기준으로 삼은 &lt;b&gt;다양한 loss&lt;/b&gt;에 대해 소개해 보도록하겠습니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;감사합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;[Reference site]&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#contrastive-loss&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#contrastive-loss&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1633268604885&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Contrastive Representation Learning&quot; data-og-description=&quot;The main idea of contrastive learning is to learn representations such that similar samples stay close to each other, while dissimilar ones are far apart. Contrastive learning can be applied to both supervised and unsupervised data and has been shown to ac&quot; data-og-host=&quot;lilianweng.github.io&quot; data-og-source-url=&quot;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#contrastive-loss&quot; data-og-url=&quot;https://lilianweng.github.io/2021/05/31/contrastive-representation-learning.html&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#contrastive-loss&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://lilianweng.github.io/lil-log/2021/05/31/contrastive-representation-learning.html#contrastive-loss&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Contrastive Representation Learning&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;The main idea of contrastive learning is to learn representations such that similar samples stay close to each other, while dissimilar ones are far apart. Contrastive learning can be applied to both supervised and unsupervised data and has been shown to ac&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;lilianweng.github.io&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&quot;&gt;https://slazebni.cs.illinois.edu/spring17/lec09_similarity.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Self-Supervised Learning/Contrastive learning (2018~)</category>
      <category>contrastive learning</category>
      <category>contrastive loss</category>
      <category>deep metric learning</category>
      <category>metric learning</category>
      <category>similarity learning</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/334</guid>
      <comments>https://89douner.tistory.com/334#entry334comment</comments>
      <pubDate>Wed, 29 Sep 2021 19:32:53 +0900</pubDate>
    </item>
    <item>
      <title>Self-Supervised learning이란?</title>
      <link>https://89douner.tistory.com/332</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이번 글에서는&lt;b&gt; self-supervised learning&lt;/b&gt;이 무엇인지 설명해보려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Self-supervised learning는 개념이 어떻게 해서 탄생하게 됐고, 현재 어떠한 방향으로 학습이 되고 있는지 간단히 살펴보겠습니다.&lt;/span&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;1. Unsupervised learning&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Unsupervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;은 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;label(&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;정답&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이 없는 데이터로 학습하는 모든 방법론을 일컫습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;514&quot; data-origin-height=&quot;385&quot; data-filename=&quot;그림1.gif&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/xp9K2/btrgoJIcLAc/giUdeFc2MpRk0xoSc4lGjk/img.gif&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/xp9K2/btrgoJIcLAc/giUdeFc2MpRk0xoSc4lGjk/img.gif&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://machinelearningknowledge.ai/supervised-vs-unsupervised-learning/&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/xp9K2/btrgoJIcLAc/giUdeFc2MpRk0xoSc4lGjk/img.gif&quot; srcset=&quot;https://blog.kakaocdn.net/dn/xp9K2/btrgoJIcLAc/giUdeFc2MpRk0xoSc4lGjk/img.gif&quot; data-origin-width=&quot;514&quot; data-origin-height=&quot;385&quot; data-filename=&quot;그림1.gif&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://machinelearningknowledge.ai/supervised-vs-unsupervised-learning/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;라벨이 없이 학습 할 수 있는 사례는 아래와 같이 다양 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;478&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bsvxb5/btrgoBQ5isG/yM7NByL1FkiFxGKtiJk0Y1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bsvxb5/btrgoBQ5isG/yM7NByL1FkiFxGKtiJk0Y1/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://www.researchgate.net/figure/Taxonomy-of-Unsupervised-Learning-Techniques_fig1_319952798&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bsvxb5/btrgoBQ5isG/yM7NByL1FkiFxGKtiJk0Y1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbsvxb5%2FbtrgoBQ5isG%2FyM7NByL1FkiFxGKtiJk0Y1%2Fimg.png&quot; data-origin-width=&quot;597&quot; data-origin-height=&quot;478&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://www.researchgate.net/figure/Taxonomy-of-Unsupervised-Learning-Techniques_fig1_319952798&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 taxonomy에서 &lt;b&gt;&quot;Unsupervised Learning Techniquess&quot; &amp;rarr; &quot;Hierarchical learning&quot; &amp;rarr; &quot;Deep learning&quot;&lt;/b&gt;의 대표적인 사례는 Generative model 중 하나인 &lt;b&gt;GAN&lt;/b&gt;이라고 볼 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span&gt;(&amp;darr;&amp;darr;&amp;darr;GAN 관련 정리된 글&amp;darr;&amp;darr;&amp;darr;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span&gt;&lt;a href=&quot;https://89douner.tistory.com/329&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://89douner.tistory.com/329&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632895939329&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5-1. GAN (Part1. GAN architecture)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &amp;quot;Generative Adversarial Nets&amp;quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/329&quot; data-og-url=&quot;https://89douner.tistory.com/329&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/ZgJGF/hyLNilxpyt/ghxlb9M9GkAKhizs3EgbT0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/bGE1QN/hyLLFWXRH9/Z05ZkwKsZUUJe2isSbwaWk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/b1bu3b/hyLNfoN0Zl/6hqcLJ0KpkUQXVna8Y6s61/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/329&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/ZgJGF/hyLNilxpyt/ghxlb9M9GkAKhizs3EgbT0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/bGE1QN/hyLLFWXRH9/Z05ZkwKsZUUJe2isSbwaWk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/b1bu3b/hyLNfoN0Zl/6hqcLJ0KpkUQXVna8Y6s61/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5-1. GAN (Part1. GAN architecture)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &quot;Generative Adversarial Nets&quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;딥러닝 외에도&lt;b&gt; 기존&lt;/b&gt;에 &lt;b&gt;dimension reduction&lt;/b&gt; task에서도 다양한 &lt;b&gt;unsupervied learning 모델&lt;/b&gt;들이 사용 됐는데, 그 중에 대표적으로 알고 있는 것들은&lt;b&gt; t-SNE, auto-encoder&lt;/b&gt; 같은 것들이 있습니다. T-SNE, auto-encoder 모두 라벨 없이 데이터의 latent space를 찾는 dimension reduction 기법이라 할 수 있습니다. (auto-encoder는 애초에 처음에 dimension reduction 방식으로 제안이 되었죠)&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;T-SNE의 embeding 결과 예시&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;526&quot; data-origin-height=&quot;207&quot; data-filename=&quot;그림3.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/E7Vjv/btrgogfnkaR/Zq7QYrgdWHaUDlnfaQ9GjK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/E7Vjv/btrgogfnkaR/Zq7QYrgdWHaUDlnfaQ9GjK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/E7Vjv/btrgogfnkaR/Zq7QYrgdWHaUDlnfaQ9GjK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FE7Vjv%2FbtrgogfnkaR%2FZq7QYrgdWHaUDlnfaQ9GjK%2Fimg.png&quot; data-origin-width=&quot;526&quot; data-origin-height=&quot;207&quot; data-filename=&quot;그림3.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;Auto-Encoder의 embedding&lt;/span&gt;&amp;nbsp;결과 예시&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;&lt;/span&gt;&lt;span style=&quot;color: #777777;&quot;&gt;&amp;darr;)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;519&quot; data-origin-height=&quot;254&quot; data-filename=&quot;그림4.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/LEUHD/btrghPqidi1/oEFlnRXKkvvh3i8pvipRMK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/LEUHD/btrghPqidi1/oEFlnRXKkvvh3i8pvipRMK/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://www.youtube.com/watch?v=rNh2CrTFpm4&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/LEUHD/btrghPqidi1/oEFlnRXKkvvh3i8pvipRMK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FLEUHD%2FbtrghPqidi1%2FoEFlnRXKkvvh3i8pvipRMK%2Fimg.png&quot; data-origin-width=&quot;519&quot; data-origin-height=&quot;254&quot; data-filename=&quot;그림4.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://www.youtube.com/watch?v=rNh2CrTFpm4&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span&gt;최초의 CNN 모델인 LeNet(1998)의 창시자 &lt;b&gt;Yann LeCun 교수&lt;/b&gt; 또한 평소에 &lt;b&gt;unsupervised learning&lt;/b&gt; 방식을 매우 &lt;b&gt;강조&lt;/b&gt;해왔습니다. Yann LeCun 교수는 &lt;b&gt;NIPS 2016 Key Note&lt;/b&gt;에서는 &lt;b&gt;아래 PPT&lt;/b&gt; 내용을 발표하였고,&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;952&quot; data-origin-height=&quot;609&quot; data-filename=&quot;그림7.png&quot; width=&quot;701&quot; height=&quot;448&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/q1ZyT/btrgjzs5brh/rkFNVq6wKNqjlAc3HJOJ5k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/q1ZyT/btrgjzs5brh/rkFNVq6wKNqjlAc3HJOJ5k/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/q1ZyT/btrgjzs5brh/rkFNVq6wKNqjlAc3HJOJ5k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fq1ZyT%2Fbtrgjzs5brh%2FrkFNVq6wKNqjlAc3HJOJ5k%2Fimg.png&quot; data-origin-width=&quot;952&quot; data-origin-height=&quot;609&quot; data-filename=&quot;그림7.png&quot; width=&quot;701&quot; height=&quot;448&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;같은 해(&lt;b&gt;2016&lt;/b&gt;) &lt;b&gt;CMU&lt;/b&gt;에서 특히 이를 강조하기도 했죠.&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=Bmq9Yyx_u-s&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/J2twy/hyLNjEKw7c/jJCyk2uiaKeD8kAg7MrBEk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=728_98_864_246&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/Bmq9Yyx_u-s&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;802&quot; data-origin-height=&quot;791&quot; data-filename=&quot;그림6.png&quot; width=&quot;655&quot; height=&quot;646&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c1hClF/btrgoJVKYDu/Jmh9rukc6AgpCVEYJpzD41/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c1hClF/btrgoJVKYDu/Jmh9rukc6AgpCVEYJpzD41/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://twitter.com/mldcmu/status/1046869963347283973&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c1hClF/btrgoJVKYDu/Jmh9rukc6AgpCVEYJpzD41/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc1hClF%2FbtrgoJVKYDu%2FJmh9rukc6AgpCVEYJpzD41%2Fimg.png&quot; data-origin-width=&quot;802&quot; data-origin-height=&quot;791&quot; data-filename=&quot;그림6.png&quot; width=&quot;655&quot; height=&quot;646&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://twitter.com/mldcmu/status/1046869963347283973&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span&gt;2. Yann Lecun &amp;amp; Self-Supervised Learning&lt;/span&gt;&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;[2-1. 2018]&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;/span&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;2018.09.12 Samsung AI Forum (SAIF)&lt;/b&gt; 에서 &lt;b&gt;Yan Lecun 교수&lt;/b&gt;는&lt;b&gt; self-supervised learning&lt;/b&gt;이라는&lt;b&gt; 용어&lt;/b&gt;를 &lt;b&gt;사용&lt;/b&gt;하게 됩니다. (물론 이전에도 사용해 왔을 수 도 있습니다) Yan Lecun은 supervised learning의 단점, reinforcement learning의 단점을 통해 self-supervised learning의 필요성을 주장했습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Supervised learning의 단점
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&quot;레이블&quot; 데이터가 많아야 함&lt;/li&gt;
&lt;li&gt;학습되지 않은 데이터가 튀어나오면 예측 불가능&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Reinforcement learning의 단점
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;실패해도 다시 시도하면 되는 게임에선 가능&lt;/li&gt;
&lt;li&gt;현실 세계에서는 실패 자체가 치명적일 수 있어서 적용하기 힘듬&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Self-supervised learning의 필요성
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;주변 상황과 조건을 고려해 예측해야 함&lt;/li&gt;
&lt;li&gt;실패하기 전에 사고가 난다는 것을 예측해야 함&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1234&quot; data-origin-height=&quot;812&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wdbMM/btrgh9oxIDD/FGl5reMnDZXAUIxMtdVizK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wdbMM/btrgh9oxIDD/FGl5reMnDZXAUIxMtdVizK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wdbMM/btrgh9oxIDD/FGl5reMnDZXAUIxMtdVizK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwdbMM%2Fbtrgh9oxIDD%2FFGl5reMnDZXAUIxMtdVizK%2Fimg.png&quot; data-origin-width=&quot;1234&quot; data-origin-height=&quot;812&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; Yan Lecun이 self-supervised learning을 언급했다는 내용&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://news.samsung.com/kr/%EC%84%B8%EA%B3%84-%EC%84%9D%ED%95%99%EB%93%A4%EC%9D%98-%EB%88%88%EC%9C%BC%EB%A1%9C-%EB%B3%B8-ai%EC%9D%98-%EB%AF%B8%EB%9E%98%EC%82%BC%EC%84%B1-ai-%ED%8F%AC%EB%9F%BC-2018&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://news.samsung.com/kr/%EC%84%B8%EA%B3%84-%EC%84%9D%ED%95%99%EB%93%A4%EC%9D%98-%EB%88%88%EC%9C%BC%EB%A1%9C-%EB%B3%B8-ai%EC%9D%98-%EB%AF%B8%EB%9E%98%EC%82%BC%EC%84%B1-ai-%ED%8F%AC%EB%9F%BC-2018&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632898328078&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;세계 석학들의 눈으로 본 AI의 미래&amp;hellip;&amp;lsquo;삼성 AI 포럼 2018&amp;rsquo;&quot; data-og-description=&quot;인공지능(AI)과 딥러닝(Deep Learning) 분야 최고 권위자들이 한국을 찾았다. 9월 12~13일 이틀에 걸쳐 열린 &amp;lsquo;삼성 AI 포럼 2018&amp;rsquo;에 연사로 나서기 위해서다. 이들은 AI의 고도화된 학습기법인 &amp;lsquo;자기지&quot; data-og-host=&quot;news.samsung.com&quot; data-og-source-url=&quot;https://news.samsung.com/kr/%EC%84%B8%EA%B3%84-%EC%84%9D%ED%95%99%EB%93%A4%EC%9D%98-%EB%88%88%EC%9C%BC%EB%A1%9C-%EB%B3%B8-ai%EC%9D%98-%EB%AF%B8%EB%9E%98%EC%82%BC%EC%84%B1-ai-%ED%8F%AC%EB%9F%BC-2018&quot; data-og-url=&quot;https://news.samsung.com/kr/%ec%84%b8%ea%b3%84-%ec%84%9d%ed%95%99%eb%93%a4%ec%9d%98-%eb%88%88%ec%9c%bc%eb%a1%9c-%eb%b3%b8-ai%ec%9d%98-%eb%af%b8%eb%9e%98%ec%82%bc%ec%84%b1-ai-%ed%8f%ac%eb%9f%bc-2018&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/cA7ruQ/hyLLEX2ItL/tmAzHEYl1ihSEDGPzxWNO1/img.jpg?width=720&amp;amp;height=405&amp;amp;face=56_51_682_189,https://scrap.kakaocdn.net/dn/cmUUat/hyLLzoV3PU/tcVtiNZsXMdxv74ObBSiq0/img.jpg?width=720&amp;amp;height=405&amp;amp;face=56_51_682_189,https://scrap.kakaocdn.net/dn/cRR7Yo/hyLLC0gJuR/nMzk5BaQU7V46qntxy7pUk/img.jpg?width=849&amp;amp;height=653&amp;amp;face=172_58_684_470&quot;&gt;&lt;a href=&quot;https://news.samsung.com/kr/%EC%84%B8%EA%B3%84-%EC%84%9D%ED%95%99%EB%93%A4%EC%9D%98-%EB%88%88%EC%9C%BC%EB%A1%9C-%EB%B3%B8-ai%EC%9D%98-%EB%AF%B8%EB%9E%98%EC%82%BC%EC%84%B1-ai-%ED%8F%AC%EB%9F%BC-2018&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://news.samsung.com/kr/%EC%84%B8%EA%B3%84-%EC%84%9D%ED%95%99%EB%93%A4%EC%9D%98-%EB%88%88%EC%9C%BC%EB%A1%9C-%EB%B3%B8-ai%EC%9D%98-%EB%AF%B8%EB%9E%98%EC%82%BC%EC%84%B1-ai-%ED%8F%AC%EB%9F%BC-2018&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/cA7ruQ/hyLLEX2ItL/tmAzHEYl1ihSEDGPzxWNO1/img.jpg?width=720&amp;amp;height=405&amp;amp;face=56_51_682_189,https://scrap.kakaocdn.net/dn/cmUUat/hyLLzoV3PU/tcVtiNZsXMdxv74ObBSiq0/img.jpg?width=720&amp;amp;height=405&amp;amp;face=56_51_682_189,https://scrap.kakaocdn.net/dn/cRR7Yo/hyLLC0gJuR/nMzk5BaQU7V46qntxy7pUk/img.jpg?width=849&amp;amp;height=653&amp;amp;face=172_58_684_470');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;세계 석학들의 눈으로 본 AI의 미래&amp;hellip;&amp;lsquo;삼성 AI 포럼 2018&amp;rsquo;&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;인공지능(AI)과 딥러닝(Deep Learning) 분야 최고 권위자들이 한국을 찾았다. 9월 12~13일 이틀에 걸쳐 열린 &amp;lsquo;삼성 AI 포럼 2018&amp;rsquo;에 연사로 나서기 위해서다. 이들은 AI의 고도화된 학습기법인 &amp;lsquo;자기지&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;news.samsung.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632897493019&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;Samsung AI Forum 2018 | Samsung Advanced Institute of Technology&quot; data-og-description=&quot;Samsung AI Forum 2018 | Samsung Advanced Institute of Technology&quot; data-og-host=&quot;www.sait.samsung.co.kr&quot; data-og-source-url=&quot;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&quot; data-og-url=&quot;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://www.sait.samsung.co.kr/saithome/event/saif2018.do&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Samsung AI Forum 2018 | Samsung Advanced Institute of Technology&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Samsung AI Forum 2018 | Samsung Advanced Institute of Technology&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;www.sait.samsung.co.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;[2-2. 2019]&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;2018년도 까지만 해도 &lt;span style=&quot;color: #000000;&quot;&gt;self-supervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;과 관련된 연구들이 진행되고 있었지만&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, unsupervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이라는 용어와 별도로 분리해서 사용하지 않는 등 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;self-supervised learning&amp;rdquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 용어가 보편화 되지 않았던 것 같습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;하지만&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, Yann LeCun 교수는 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;self-supervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이라는 개념이 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;unsupervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 용어와 분별하여 사용할 필요성을 느끼고 &lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;2019&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;년&lt;span&gt; 트위터에 &lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;self-supervised learning&amp;rdquo;&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이라는 &lt;b&gt;개념&lt;/b&gt;을 &lt;b&gt;구체화&lt;/b&gt; 하기 시작합니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;689&quot; data-origin-height=&quot;380&quot; data-filename=&quot;그림9.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/qmwfZ/btrgoYSTEBh/p6ZfxJM5LWLSnO8Qcn4QqK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/qmwfZ/btrgoYSTEBh/p6ZfxJM5LWLSnO8Qcn4QqK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/qmwfZ/btrgoYSTEBh/p6ZfxJM5LWLSnO8Qcn4QqK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FqmwfZ%2FbtrgoYSTEBh%2Fp6ZfxJM5LWLSnO8Qcn4QqK%2Fimg.png&quot; data-origin-width=&quot;689&quot; data-origin-height=&quot;380&quot; data-filename=&quot;그림9.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1002&quot; data-origin-height=&quot;302&quot; data-filename=&quot;그림10.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/VP8vT/btrgnqiklrk/2Q11o2cxUbFklrnHaxZ2JK/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/VP8vT/btrgnqiklrk/2Q11o2cxUbFklrnHaxZ2JK/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/VP8vT/btrgnqiklrk/2Q11o2cxUbFklrnHaxZ2JK/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FVP8vT%2Fbtrgnqiklrk%2F2Q11o2cxUbFklrnHaxZ2JK%2Fimg.jpg&quot; data-origin-width=&quot;1002&quot; data-origin-height=&quot;302&quot; data-filename=&quot;그림10.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;Representation learning용어는 뒤에서도 나옴&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;591&quot; data-origin-height=&quot;371&quot; data-filename=&quot;그림21.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kHqPn/btrgopQ8jQt/9voEasiCZmNIaP9kTzZdwk/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kHqPn/btrgopQ8jQt/9voEasiCZmNIaP9kTzZdwk/img.jpg&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처: https://chowdera.com/2021/01/20210109003603375e.html&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kHqPn/btrgopQ8jQt/9voEasiCZmNIaP9kTzZdwk/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkHqPn%2FbtrgopQ8jQt%2F9voEasiCZmNIaP9kTzZdwk%2Fimg.jpg&quot; data-origin-width=&quot;591&quot; data-origin-height=&quot;371&quot; data-filename=&quot;그림21.jpg&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://chowdera.com/2021/01/20210109003603375e.html&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;[2-3. 2020]&lt;/span&gt;&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;2020.11.05 Samsung AI Forum (SAIF)&lt;/b&gt; &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에서 &lt;b&gt;Yann &lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;LeCun 교수&lt;/b&gt;는&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;다시 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;self-supervised learning&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 &lt;b&gt;강조&lt;/b&gt;했습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;. &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;좀 더 구체적인 내용들을 토대로 연설을 시작하면서 왜 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;self-supervised learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이 필요한지 설명했습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;767&quot; data-origin-height=&quot;420&quot; data-filename=&quot;그림11.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bSrQup/btrgim82elA/km3hSbPcs3dtqXj0KkjMxK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bSrQup/btrgim82elA/km3hSbPcs3dtqXj0KkjMxK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bSrQup/btrgim82elA/km3hSbPcs3dtqXj0KkjMxK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbSrQup%2Fbtrgim82elA%2Fkm3hSbPcs3dtqXj0KkjMxK%2Fimg.png&quot; data-origin-width=&quot;767&quot; data-origin-height=&quot;420&quot; data-filename=&quot;그림11.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=BqgnnrojVBI&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/csvJVF/hyLNlWTxqP/pqcXNEJUXgvLyxVDWdAkr0/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=300_192_512_424&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/BqgnnrojVBI&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&amp;lt;그림 출처: [SAIF 2020] Day 1: Energy-Based Models for Self-Supervised Learning - Yann LeCun | Samsung&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;3. Self-Supervised Learning (SSL) Motivation&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Self-Supervised Learnig&lt;/b&gt;을 배우는&lt;b&gt; 이유&lt;/b&gt;는 다양하겠지만, &lt;b&gt;이번 글&lt;/b&gt;에서는 대표적인 &lt;b&gt;한 가지&lt;/b&gt; 이유에 대해서만 말씀드리겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;Vision 분야&lt;/b&gt;에서 딥러닝을 가장 흔하게 사용하는 방법 중 하나는&lt;b&gt; pre-trained model을 transfer learning&lt;/b&gt; 하여 사용하는 것입니다. &lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Transfer learning을 하는 이유는 &lt;b&gt;ImageNet&lt;/b&gt;과 같이 방대한 양의 데이터를 &lt;b&gt;미리 학습&lt;/b&gt;하여 다양한 이미지들에 대한 feature를 잘 뽑을 수 있게 filter들을 학습시키고, &lt;b&gt;특정 task에 적용&lt;/b&gt;하는 것이죠. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 &lt;b&gt;초기 layer&lt;/b&gt;들은 edge와 같은 특징들을 뽑아낼 수 있게 학습이 될텐데, &lt;b&gt;edge feature&lt;/b&gt;를 뽑아 줄 수 있는 filter들을 형성하는데 있어서 &lt;b&gt;이미지 종류는 크게 상관이 없을&lt;/b&gt; 확률이 높습니다. 왜냐하면, 강아지의 edge 특징이나 자동차의 edge 특징이나 거기서 거기일 가능성이 크기 때문이죠. 하지만, &lt;b&gt;마지막 layer&lt;/b&gt;에서 추출할 수 있는 &lt;b&gt;semantic 정보&lt;/b&gt;는 &lt;b&gt;이미지 종류마다 다를&lt;/b&gt; 수 있습니다. 그래서, 우리가 최종적으로 분류할 task의 이미지 종류가 pre-training 시에 사용됐던 이미지 종류와 다르다면 &lt;b&gt;마지막 layer 부분을 다시 학습(by transfer learning or fine-tuning)&lt;/b&gt;하여 최종 task 이미지의 semantic 정보를 적절하게 추출할 수 있도록 setting 해주는 것이죠.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;777&quot; data-origin-height=&quot;490&quot; data-filename=&quot;그림12.png&quot; width=&quot;622&quot; height=&quot;392&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cP1gfV/btrgjzfCRMP/lnYpMPKnBakjKdyHthxHN1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cP1gfV/btrgjzfCRMP/lnYpMPKnBakjKdyHthxHN1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cP1gfV/btrgjzfCRMP/lnYpMPKnBakjKdyHthxHN1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcP1gfV%2FbtrgjzfCRMP%2FlnYpMPKnBakjKdyHthxHN1%2Fimg.png&quot; data-origin-width=&quot;777&quot; data-origin-height=&quot;490&quot; data-filename=&quot;그림12.png&quot; width=&quot;622&quot; height=&quot;392&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;여기서 중요하게 알고 넘어가야할 용어가 있습니다. 바로 &lt;b&gt;&quot;upstream task&quot;&lt;/b&gt;와 &lt;b&gt;&quot;downstream task&quot;&lt;/b&gt;입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;upstream task&lt;/b&gt;는 &lt;b&gt;pre-training 단계&lt;/b&gt;에서 진행하는 &lt;b&gt;학습 task&lt;/b&gt;를 의미하고, &lt;b&gt;downstream task&lt;/b&gt;는 &lt;b&gt;transfer learning 시&lt;/b&gt;에 적용하고자 하는 &lt;b&gt;target task&lt;/b&gt;를 의미합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면, &lt;b&gt;transfer learning 관점에서 왜 self-supervised learning이 필요할까요?&lt;/b&gt; 이 질문에 답을 하기 위해서는 기존의 방식들에 대한 의문을 먼저 던져봐야합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;&amp;ldquo;Why should we use supervised learning method for pre-training model?&amp;rdquo;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Supervised learning&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;학습 방식이&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;downstream task&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에 효과적인&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;feature&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 학습하는데 도움이 되는가&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;?&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Pre-trained&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;model&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 학습하는데 대용량의&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;label&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;정보가 필요한가&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;?&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&lt;span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;사실 위의 두 질문에서 첫 번째 질문에 대한 답이 &quot;supervised learning 방식은 좋지않다&quot;라면 2번에 대한 답은 자연스럽게 &quot;필요없다&quot;가 됩니다. 즉, 우리는 첫 번째 질문에 대한 고찰만 하면 되는 것이죠.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;그렇다면, &lt;b&gt;1번 질문&lt;/b&gt;을 집중적으로 살펴보면서 &lt;b&gt;self-supervised learnin이 필요한 이유&lt;/b&gt;에 대해서 살펴보도록 하겠습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;Supervised learning&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;학습 방식이&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;downstream task&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에 효과적인&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;feature&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 학습하는데 도움이 되는가&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;?&quot;&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Mon May 3rd through Fri the 7&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;th&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;2021 ICLR&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Keynote&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size16&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Alexei A. &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Efros&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; (UC Berkely)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;544&quot; data-origin-height=&quot;261&quot; data-filename=&quot;그림13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cTPB96/btrgnr9s70Z/4XXzUCTVLKpqV14ITCJ1U1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cTPB96/btrgnr9s70Z/4XXzUCTVLKpqV14ITCJ1U1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cTPB96/btrgnr9s70Z/4XXzUCTVLKpqV14ITCJ1U1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcTPB96%2Fbtrgnr9s70Z%2F4XXzUCTVLKpqV14ITCJ1U1%2Fimg.png&quot; data-origin-width=&quot;544&quot; data-origin-height=&quot;261&quot; data-filename=&quot;그림13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 위의 슬라이드 자료 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://iclr.cc/media/iclr-2021/Slides/3720.pdf&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://iclr.cc/media/iclr-2021/Slides/3720.pdf&lt;/a&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr; 위의 슬라이드 발표영상 &amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=oeHiNGcSLkg&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/brSB01/hyLLHgehZz/zKvJGKAjUtUa8ZUtkuTxA1/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/oeHiNGcSLkg&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light'; color: #ee2323;&quot;&gt;위의 발표내용 중에 제가 문제라고 봤던 부분을 정리해서 설명해보도록 하겠습니다. 여기서 부터는 주관적인 해석이 많이 들어가 있으니 참고해서 봐주시고, 잘 못 됐거나 다른 관점이 있으신 분들은 댓글 남겨주시면 감사하겠습니다.&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;기존&lt;/b&gt;의 &lt;b&gt;CNN&lt;/b&gt;이 &lt;b&gt;supervised learning&lt;/b&gt;으로 &lt;b&gt;학습&lt;/b&gt;하는 &lt;b&gt;방식&lt;/b&gt;에 대해 살펴보겠습니다.&lt;b&gt; CNN 분류 학습&lt;/b&gt;의 가장 큰 &lt;b&gt;목표&lt;/b&gt;라고 할 수 있는 것은 &lt;b&gt;아래 다양한 의자들을 모두 동일한 클래스인 &quot;의자&quot;라고 분류&lt;/b&gt;하는 것 입니다. 즉, 의자의 다양한 형태에 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;robust&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;하게 분류 할 줄 알아야 하는데, 이를 위해선&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;의자의 공통적인 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;feature&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 잘 뽑아낼 수 있도록 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Conv filter&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 &lt;b&gt;학습&lt;/b&gt;되어야 합니다. 그래서, 학습 시 &lt;b&gt;아래 다양한 의자들을 모두 동일한 labe&lt;/b&gt;l로 설정하게 되는 것이죠.&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;455&quot; data-origin-height=&quot;107&quot; data-filename=&quot;그림16.png&quot; width=&quot;591&quot; height=&quot;139&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HGINn/btrgkUKSTeN/sGdOctCAH33ltzlZDdRGlk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HGINn/btrgkUKSTeN/sGdOctCAH33ltzlZDdRGlk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HGINn/btrgkUKSTeN/sGdOctCAH33ltzlZDdRGlk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHGINn%2FbtrgkUKSTeN%2FsGdOctCAH33ltzlZDdRGlk%2Fimg.png&quot; data-origin-width=&quot;455&quot; data-origin-height=&quot;107&quot; data-filename=&quot;그림16.png&quot; width=&quot;591&quot; height=&quot;139&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그런데, 잘 생각해보면 사람들이 유아기 때 저 의자를 구분할 수 있었던 이유는 '누군가가 저 모든 형태 하나 하나씩 의자라고 알려주었기 때문'이 &lt;span style=&quot;color: #ee2323;&quot;&gt;아니라&lt;/span&gt; '먼저 각각의 의자의 특성들을 잘 파악하고 서로 유사한지 아닌지 비교해나가기 때문'인 것을 알 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;People don&amp;rsquo;t rely on abstract definitions/lists of &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;shared properties&amp;rdquo;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;즉, 애초에 저 '다양한 의자들을 모두 같은 것'이라고 가정하고 출발하는게 '인간이 학습하는 방식'에 맞지 &lt;b&gt;&lt;span style=&quot;color: #ee2323;&quot;&gt;않다&lt;/span&gt;&lt;/b&gt;고 보는 것이죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;918&quot; data-origin-height=&quot;847&quot; data-filename=&quot;그림14.png&quot; width=&quot;659&quot; height=&quot;608&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bZPL9r/btrgoYerepf/ddT4ssPXDMKWSbmLnQRQoK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bZPL9r/btrgoYerepf/ddT4ssPXDMKWSbmLnQRQoK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bZPL9r/btrgoYerepf/ddT4ssPXDMKWSbmLnQRQoK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbZPL9r%2FbtrgoYerepf%2FddT4ssPXDMKWSbmLnQRQoK%2Fimg.png&quot; data-origin-width=&quot;918&quot; data-origin-height=&quot;847&quot; data-filename=&quot;그림14.png&quot; width=&quot;659&quot; height=&quot;608&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위와 같은 문제를 해결하기 위해 딥러닝 모델도 먼저 &lt;b&gt;각각의 의자 이미지들에 대한 특성을 잘 파악할 수 있게 학습시키는게 먼저&lt;/b&gt;라고 봤습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;즉, 해당 이미지들이 무엇인지를 학습하는 것이 아닌 해당 이미지들이 무엇과 유사한지를 살펴보도록 하는게 인간의 학습관점에 더 맞다고 판단한 것이죠.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;661&quot; data-origin-height=&quot;395&quot; data-filename=&quot;그림17.png&quot; width=&quot;497&quot; height=&quot;297&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/AuwLT/btrginmJvvf/knHkSGuL3D9skCe4aWCkak/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/AuwLT/btrginmJvvf/knHkSGuL3D9skCe4aWCkak/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/AuwLT/btrginmJvvf/knHkSGuL3D9skCe4aWCkak/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FAuwLT%2FbtrginmJvvf%2FknHkSGuL3D9skCe4aWCkak%2Fimg.png&quot; data-origin-width=&quot;661&quot; data-origin-height=&quot;395&quot; data-filename=&quot;그림17.png&quot; width=&quot;497&quot; height=&quot;297&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;350&quot; data-origin-height=&quot;158&quot; data-filename=&quot;그림18.jpg&quot; width=&quot;439&quot; height=&quot;198&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/KSjJe/btrgpLzb8G2/hBQDzbLzFKOW3rdgJaqkN1/img.jpg&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/KSjJe/btrgpLzb8G2/hBQDzbLzFKOW3rdgJaqkN1/img.jpg&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/KSjJe/btrgpLzb8G2/hBQDzbLzFKOW3rdgJaqkN1/img.jpg&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FKSjJe%2FbtrgpLzb8G2%2FhBQDzbLzFKOW3rdgJaqkN1%2Fimg.jpg&quot; data-origin-width=&quot;350&quot; data-origin-height=&quot;158&quot; data-filename=&quot;그림18.jpg&quot; width=&quot;439&quot; height=&quot;198&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;또한,&lt;/span&gt; 다양한 의자의 &lt;/span&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;feature&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;들을 잘 표현&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;representation&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;해줄 수 있도록 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Conv filter&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 학습&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #ff0000;&quot;&gt;learning&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;되는게&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, downstream task&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 위한&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; representation learning &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;관점에서 더 좋을 수 있다고 봤습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1905&quot; data-origin-height=&quot;853&quot; data-filename=&quot;그림19.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HFhol/btrgjFHg9bP/m7Hn9vcitXJ0hPKC0agMZk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HFhol/btrgjFHg9bP/m7Hn9vcitXJ0hPKC0agMZk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HFhol/btrgjFHg9bP/m7Hn9vcitXJ0hPKC0agMZk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHFhol%2FbtrgjFHg9bP%2Fm7Hn9vcitXJ0hPKC0agMZk%2Fimg.png&quot; data-origin-width=&quot;1905&quot; data-origin-height=&quot;853&quot; data-filename=&quot;그림19.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;4. 앞으로 볼 내용&amp;nbsp;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Self-Supervised Learning은 아래의 순서대로 발전해 왔습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇기 때문에 앞으로 Self-Supervised Learning 카테고리에서는 Pretext task, Contrastive learning, New approach 이렇게 세 가지 서브 카테고리로 나눠서 설명하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;799&quot; data-origin-height=&quot;423&quot; data-filename=&quot;그림20.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/buNtsR/btrgk8PIHmk/7MmzmMhhPnOGNkUW25f350/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/buNtsR/btrgk8PIHmk/7MmzmMhhPnOGNkUW25f350/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처:&amp;amp;amp;nbsp; https://www.youtube.com/watch?v=5BCQ7T2Rw1w&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/buNtsR/btrgk8PIHmk/7MmzmMhhPnOGNkUW25f350/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbuNtsR%2Fbtrgk8PIHmk%2F7MmzmMhhPnOGNkUW25f350%2Fimg.png&quot; data-origin-width=&quot;799&quot; data-origin-height=&quot;423&quot; data-filename=&quot;그림20.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처:&amp;nbsp; https://www.youtube.com/watch?v=5BCQ7T2Rw1w&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Self-Supervised Learning</category>
      <category>self-supervised learning</category>
      <category>SSL</category>
      <category>Unsupervised learning</category>
      <category>Yann LeCun</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/332</guid>
      <comments>https://89douner.tistory.com/332#entry332comment</comments>
      <pubDate>Wed, 29 Sep 2021 16:02:17 +0900</pubDate>
    </item>
    <item>
      <title>5-2.GAN (Part2. Theoretical Results)</title>
      <link>https://89douner.tistory.com/331</link>
      <description>&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;안녕하세요.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;이번 글(Part2)&lt;/b&gt;에서는 지난 Part1에 이어서 &lt;b&gt;GAN의 수학적 증명&lt;/b&gt;과 &lt;b&gt;그 외 나머지 부분&lt;/b&gt;들에 대해 정리하려고 합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;GAN part1 &lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;https://89douner.tistory.com/329?category=908620&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632639985248&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5-1. GAN (Part1. GAN architecture)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &amp;quot;Generative Adversarial Nets&amp;quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot; data-og-url=&quot;https://89douner.tistory.com/329&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/ru4qd/hyLI1TDDg3/z4WnmE2Nouhmeaxn159xf1/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/ukjwD/hyLKeKzA67/iJ4kKHWKKbnpYcyroQJm21/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cMaffe/hyLI1F4XHI/TI8KCipiobbxJDuWe0JMC1/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/ru4qd/hyLI1TDDg3/z4WnmE2Nouhmeaxn159xf1/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/ukjwD/hyLKeKzA67/iJ4kKHWKKbnpYcyroQJm21/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cMaffe/hyLI1F4XHI/TI8KCipiobbxJDuWe0JMC1/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5-1. GAN (Part1. GAN architecture)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &quot;Generative Adversarial Nets&quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;논문의 section 4, 5, 6 부분&lt;/b&gt;들 마저 다룰 예정인데, 설명의 편의를 위해 'section 5'와 'section 6'의 순서만 바꾸도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Section 4: &lt;span style=&quot;color: #000000;&quot;&gt;Theoretical Results&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;Section 6: Advantages and disadvantages&lt;/li&gt;
&lt;li&gt;Section 5: Experiments&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, &lt;b&gt;&quot;section 4: Theoretical Results&quot;&lt;/b&gt;를 &lt;b&gt;리뷰&lt;/b&gt;하기 &lt;b&gt;전&lt;/b&gt;에 &lt;b&gt;딥러닝 모델을 설계할 시 필요한 몇 가지 기준(질문)들&lt;/b&gt;에 대해서 이야기 하려고 합니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;&lt;span style=&quot;font-family: AppleSDGothicNeo-Regular, 'Malgun Gothic', '맑은 고딕', dotum, 돋움, sans-serif;&quot;&gt;1. 딥러닝 모델 설계를 위한 네 가지 기준(질문)&lt;/span&gt;&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;딥러닝 모델을 확률 모델로써 설계할 때&lt;/b&gt;에는 &lt;b&gt;네 가지 기준(질문)&lt;/b&gt;을 충족하는지 따져봐야 합니다. (물론 더 많은 기준(질문)이 있겠지만 GAN 논문에서는 크게 &lt;b&gt;네 가지 기준&lt;/b&gt;으로 요약하고 있습니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(뒤에서 더 자세히 설명하겠지만, 실제 neural network가 사용되면 아래 기준을 이상적으로 만족하는게 불가능하다고 합니다. 그러니, 아래 기준들은 딥러닝 설계 시 이상적인 guideline 정도로 보시면 좋을 것 같습니다.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ol style=&quot;list-style-type: decimal;&quot; data-ke-list-type=&quot;decimal&quot;&gt;
&lt;li&gt;설계한 확률(딥러닝)모델이 tractable 한 것인가? (계산 시스템 관점에서 충분히 다룰 수 있는 알고리즘인가?)&lt;/li&gt;
&lt;li&gt;설계한 확률(딥러닝)모델이 최적해(global optimum)를 갖는가?&lt;/li&gt;
&lt;li&gt;설계한 확률(딥러닝)모델이 최적해(global optimum)에 수렴하는가?&lt;/li&gt;
&lt;li&gt;설계한 확률(딥러닝)모델의 최적해(global optimum)을 찾을 수 있는 알고리즘이 존재하는가?&lt;/li&gt;
&lt;/ol&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저,&lt;b&gt; 첫 번째 질문&lt;/b&gt;인 &lt;b&gt;tractable&lt;/b&gt;과 관련한 답은 이미 &lt;b&gt;part1&lt;/b&gt;에서 했기 때문에 &lt;b&gt;아래 링크&lt;/b&gt;를 참고해주시면 좋을 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;아래 링크에서 &lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;ldquo;3.Adverarial nets&amp;rdquo; &amp;rarr;&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt; &quot;[3-1-1-. First paragraph &amp;amp; Second sentence]&quot;&lt;/b&gt; 부분 참고&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;)&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://89douner.tistory.com/329&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632645994231&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5-1. GAN (Part1. GAN architecture)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &amp;quot;Generative Adversarial Nets&amp;quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/329&quot; data-og-url=&quot;https://89douner.tistory.com/329&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/prHrX/hyLKsWriev/PyA1dZSspp2NDlTOo6NXVk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/chPg6k/hyLKmWcTXp/r8NKHMxSdBRYgFJswPbFTk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/OXhDD/hyLKkxknGy/J2mejv7fSh1BmM3ZTeV8Jk/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/329&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/prHrX/hyLKsWriev/PyA1dZSspp2NDlTOo6NXVk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/chPg6k/hyLKmWcTXp/r8NKHMxSdBRYgFJswPbFTk/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/OXhDD/hyLKkxknGy/J2mejv7fSh1BmM3ZTeV8Jk/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5-1. GAN (Part1. GAN architecture)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &quot;Generative Adversarial Nets&quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;나머지 질문들은 &quot;4. Theoretical Results&quot;를 리뷰하면서 설명해보도록 하겠습니다.&lt;/span&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1712&quot; data-origin-height=&quot;653&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dlmRck/btrfXxCt78H/BEvKuEi0f9LJpizcjsYtP0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dlmRck/btrfXxCt78H/BEvKuEi0f9LJpizcjsYtP0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dlmRck/btrfXxCt78H/BEvKuEi0f9LJpizcjsYtP0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdlmRck%2FbtrfXxCt78H%2FBEvKuEi0f9LJpizcjsYtP0%2Fimg.png&quot; data-origin-width=&quot;1712&quot; data-origin-height=&quot;653&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;4. Theoretical Results&lt;/h3&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[ 4-1. Global optimality of \(p_{g}=p_{data}\) ]&lt;/b&gt;&lt;/h4&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;설계한 확률(딥러닝)모델이 최적해(global optimum)를 갖는가?&quot;&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;딥러닝&lt;/b&gt;에서&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt; 분류(classification)&lt;/b&gt;하는 문제에서는 &lt;b&gt;최종 &lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;loss&lt;/b&gt;인 &lt;b&gt;cross-entropy&lt;/b&gt;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;값이 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;0&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;에 도달하기를 바랍&lt;/b&gt;니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;765&quot; data-origin-height=&quot;416&quot; data-filename=&quot;그림4.png&quot; width=&quot;388&quot; height=&quot;211&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/X6FXY/btrfXyakC1A/c8zvFt7ZrKvqvelpGU1OSK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/X6FXY/btrfXyakC1A/c8zvFt7ZrKvqvelpGU1OSK/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처: https://towardsdatascience.com/cross-entropy-loss-function-f38c4ec8643e&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/X6FXY/btrfXyakC1A/c8zvFt7ZrKvqvelpGU1OSK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FX6FXY%2FbtrfXyakC1A%2Fc8zvFt7ZrKvqvelpGU1OSK%2Fimg.png&quot; data-origin-width=&quot;765&quot; data-origin-height=&quot;416&quot; data-filename=&quot;그림4.png&quot; width=&quot;388&quot; height=&quot;211&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://towardsdatascience.com/cross-entropy-loss-function-f38c4ec8643e&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;638&quot; data-origin-height=&quot;480&quot; data-filename=&quot;그림3.png&quot; width=&quot;405&quot; height=&quot;305&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/7NkYd/btrfV9uTxpj/EpAvPySHiggmwin8VwHJyk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/7NkYd/btrfV9uTxpj/EpAvPySHiggmwin8VwHJyk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/7NkYd/btrfV9uTxpj/EpAvPySHiggmwin8VwHJyk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F7NkYd%2FbtrfV9uTxpj%2FEpAvPySHiggmwin8VwHJyk%2Fimg.png&quot; data-origin-width=&quot;638&quot; data-origin-height=&quot;480&quot; data-filename=&quot;그림3.png&quot; width=&quot;405&quot; height=&quot;305&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;그렇다면, GAN의 최종 loss는 어떠한 값을 갖어야 global optimum이라고 할 수 있을까요?&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;즉, 최종적으로 도달하고자 하는 loss 값은 무엇일까요?&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금부터 이에 대한 답을 해보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;58&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;706&quot; height=&quot;46&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/QnBti/btrf3xhbvM1/UfxqHBKnbxnwHnsqhzz7TK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/QnBti/btrf3xhbvM1/UfxqHBKnbxnwHnsqhzz7TK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/QnBti/btrf3xhbvM1/UfxqHBKnbxnwHnsqhzz7TK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FQnBti%2Fbtrf3xhbvM1%2FUfxqHBKnbxnwHnsqhzz7TK%2Fimg.png&quot; data-origin-width=&quot;887&quot; data-origin-height=&quot;58&quot; data-filename=&quot;제목 없음.png&quot; width=&quot;706&quot; height=&quot;46&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Generator&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;관점&lt;/b&gt;에서 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;MinMax&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; problem (value function)&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;은 \(P_{g}=P_{data}\) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에서 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;global optimum &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;값을 갖아야 합니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;. &lt;b&gt;직관적&lt;/b&gt;으로 봤을 때,&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Generator&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Data&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;와 유사한 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;distribution&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;을 형성하는 지점&lt;/b&gt;인 곳에서 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;global optimum &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;값을 찾게 되면 학습을 중단하겠죠&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;?&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이 논문에서는 &lt;b&gt;optimal discriminator의 global optimum&lt;/b&gt; 값은 &lt;b&gt;아래와 같은 수식에 의해 도출&lt;/b&gt; 된다고 합니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;\(D^{*}(x)=\frac{P_{data}(x)}{P_{data}(x)+P_{g}(x)}\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;위 수식의 직관적인 설명은 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;\(P_{g}=P_{data}\)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 일 경우 \(D^{*}(x)=\frac{1}{2}\) 값을 갖는데, 이것이 의미하는 바는 &lt;b&gt;discriminator가 가짜 이미지와 진짜 이미지를 판별할 확률이 모두&lt;/b&gt; \(\frac{1}{2}\)라는 사실과 값습니다. 즉, &lt;b&gt;가짜와 진짜를 완벽히 혼동한 상태&lt;/b&gt;인 것이죠. 그러므로, GAN loss function인 V(D,G) 값이 &lt;span style=&quot;color: #000000;&quot;&gt;\(\frac{1}{2}\)에 가까이 도달하면 학습을 종료하면 됩니다. (CNN loss인 Cross entropy 에서는 0값에 도달하면 보통 학습을 중단시키죠)&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;그럼, 이제부터 어떻게 optimal discriminator 수식이 V(D,G) 수식에서 &lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;\(D^{*}(x)=\frac{P_{data}(x)}{P_{data}(x)+P_{g}(x)}\) 수식으로 유도되는지 수학적으로 살펴볼까요?&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(유도과정과 관련된 내용은 아래 영상 내용을 기반으로 작성했으니 참고해주세요.)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=odpjk7_tGY0&quot;&gt;https://www.youtube.com/watch?v=odpjk7_tGY0&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=odpjk7_tGY0&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/bc1zjw/hyLKrcTzZl/FuzgSRH2sv7z7mk5YIJ290/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/odpjk7_tGY0&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;807&quot; data-origin-height=&quot;150&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/6lKap/btrf1h0zG3V/1NpX9FOJNJtG4ELZkhB6zK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/6lKap/btrf1h0zG3V/1NpX9FOJNJtG4ELZkhB6zK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/6lKap/btrf1h0zG3V/1NpX9FOJNJtG4ELZkhB6zK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F6lKap%2Fbtrf1h0zG3V%2F1NpX9FOJNJtG4ELZkhB6zK%2Fimg.png&quot; data-origin-width=&quot;807&quot; data-origin-height=&quot;150&quot; data-filename=&quot;그림2.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;195&quot; data-filename=&quot;그림3.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cDZsXx/btrfV9u6r73/fPx6GrTpQkKcnEcxLAMSZK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cDZsXx/btrfV9u6r73/fPx6GrTpQkKcnEcxLAMSZK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cDZsXx/btrfV9u6r73/fPx6GrTpQkKcnEcxLAMSZK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcDZsXx%2FbtrfV9u6r73%2FfPx6GrTpQkKcnEcxLAMSZK%2Fimg.png&quot; data-origin-width=&quot;1114&quot; data-origin-height=&quot;195&quot; data-filename=&quot;그림3.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Optimal&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; discriminator 상태에 도달했다는 것은&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;Generator&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;관점에서 봤을 때 이미 \(P_{data}=P_{g}\) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;상태로 학습이 거의 다 됐다는 것을 의미합니다.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;즉&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt; G&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 이미 최상의 상태에 도달했다고 가정하여 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;G&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;fix&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;시키고 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;optimal discriminator D*(x)&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;에 대한 수식만&lt;/b&gt; 찾는 것이죠&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;위와 같은 사실을 봤을 때, &lt;b&gt;GAN&lt;/b&gt;이라는 모델은 generator와 관련된 loss가 아닌 &lt;b&gt;discriminator와 관련된 loss에 의해 학습을 종료할지 말지 결정&lt;/b&gt;하게 되는 것이죠. 바꿔 말하자면, &lt;b&gt;optimal discriminator의 값을 찾기 위해 discriminator(D) 부분만 고려&lt;/b&gt;하겠다는 것입니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1775&quot; data-origin-height=&quot;677&quot; data-filename=&quot;그림4.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/OTbkn/btrf3tfjG1o/nYqwbXKEAClM46ts5a2NW1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/OTbkn/btrf3tfjG1o/nYqwbXKEAClM46ts5a2NW1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/OTbkn/btrf3tfjG1o/nYqwbXKEAClM46ts5a2NW1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FOTbkn%2Fbtrf3tfjG1o%2FnYqwbXKEAClM46ts5a2NW1%2Fimg.png&quot; data-origin-width=&quot;1775&quot; data-origin-height=&quot;677&quot; data-filename=&quot;그림4.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;color: #000000; font-family: 'Noto Sans Light';&quot;&gt;위 수식을 보면 G(z)를 x로 바꿔 주면서 \(E_{z\sim ~p_{z}(z)}\) 수식부분이&amp;nbsp;&lt;span style=&quot;color: #000000;&quot;&gt;\(E_{x\sim ~p_{g}(x)}\) 로 변경된 것을 확인할 수 있습니다. 이것이 가능한 이유는 &lt;b&gt;G(z)의 차원이 곧 X차원과 동일&lt;/b&gt;하여 &lt;b&gt;G(z)에서 발생한 값을 X 차원 상의 하나의 x값으로 표현(=mapping)&lt;/b&gt;할 수 있기 때문입니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #000000;&quot;&gt;P&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;z&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(z): low dimension distribution,&lt;span&gt;&amp;nbsp;&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;z = 100&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;P&lt;span style=&quot;color: #000000;&quot;&gt;g&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(x): high dimension distribution, x = 64x64&amp;nbsp;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;987&quot; data-origin-height=&quot;706&quot; data-filename=&quot;그림5.png&quot; width=&quot;559&quot; height=&quot;400&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bwIJkL/btrfVtA9IG5/vKDq64Ajzpp60KMwklYft1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bwIJkL/btrfVtA9IG5/vKDq64Ajzpp60KMwklYft1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bwIJkL/btrfVtA9IG5/vKDq64Ajzpp60KMwklYft1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbwIJkL%2FbtrfVtA9IG5%2FvKDq64Ajzpp60KMwklYft1%2Fimg.png&quot; data-origin-width=&quot;987&quot; data-origin-height=&quot;706&quot; data-filename=&quot;그림5.png&quot; width=&quot;559&quot; height=&quot;400&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;위와 관련된 내용&lt;/b&gt;은 &lt;b&gt;아래 링크&lt;/b&gt;(글)에서 &lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;ldquo;&lt;/span&gt;3. Adversarial nets&amp;rdquo;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;rarr;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;&amp;ldquo;[3-2-2. Second paragraph &amp;amp; Second sentence]&quot;&lt;/b&gt; 부분을 &lt;b&gt;참고&lt;/b&gt;해주세요.&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;https://89douner.tistory.com/329?category=908620&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure id=&quot;og_1632707424444&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5-1. GAN (Part1. GAN architecture)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &amp;quot;Generative Adversarial Nets&amp;quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot; data-og-url=&quot;https://89douner.tistory.com/329&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/ballNu/hyLKjFZ2aM/UKt1OKzbPMmIBJqX03JeJ0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cTZJDH/hyLKsCURuO/SKEdOd1MkOC8jV8Vj7V0Y0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/DIERG/hyLKmCI0LZ/zIxNcZkocbufzRjAxgU111/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/ballNu/hyLKjFZ2aM/UKt1OKzbPMmIBJqX03JeJ0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cTZJDH/hyLKsCURuO/SKEdOd1MkOC8jV8Vj7V0Y0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/DIERG/hyLKmCI0LZ/zIxNcZkocbufzRjAxgU111/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5-1. GAN (Part1. GAN architecture)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &quot;Generative Adversarial Nets&quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1136&quot; data-origin-height=&quot;425&quot; data-filename=&quot;그림6.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cahpEy/btrga3NSZoy/k4hFkiYWSqBmK1YCTtiZnk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cahpEy/btrga3NSZoy/k4hFkiYWSqBmK1YCTtiZnk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cahpEy/btrga3NSZoy/k4hFkiYWSqBmK1YCTtiZnk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcahpEy%2Fbtrga3NSZoy%2Fk4hFkiYWSqBmK1YCTtiZnk%2Fimg.png&quot; data-origin-width=&quot;1136&quot; data-origin-height=&quot;425&quot; data-filename=&quot;그림6.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1141&quot; data-origin-height=&quot;483&quot; data-filename=&quot;그림7.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kZ6Ok/btrfWMgnUqw/umyTyokrzXePoSF2TPu4R1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kZ6Ok/btrfWMgnUqw/umyTyokrzXePoSF2TPu4R1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kZ6Ok/btrfWMgnUqw/umyTyokrzXePoSF2TPu4R1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FkZ6Ok%2FbtrfWMgnUqw%2FumyTyokrzXePoSF2TPu4R1%2Fimg.png&quot; data-origin-width=&quot;1141&quot; data-origin-height=&quot;483&quot; data-filename=&quot;그림7.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;929&quot; data-origin-height=&quot;439&quot; data-filename=&quot;그림8.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/rB1qE/btrf3yugmS6/R8kkOQ4DPWrqphYyTuML2k/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/rB1qE/btrf3yugmS6/R8kkOQ4DPWrqphYyTuML2k/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/rB1qE/btrf3yugmS6/R8kkOQ4DPWrqphYyTuML2k/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FrB1qE%2Fbtrf3yugmS6%2FR8kkOQ4DPWrqphYyTuML2k%2Fimg.png&quot; data-origin-width=&quot;929&quot; data-origin-height=&quot;439&quot; data-filename=&quot;그림8.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;971&quot; data-origin-height=&quot;288&quot; data-filename=&quot;그림9.png&quot; width=&quot;637&quot; height=&quot;189&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/b9fJMc/btrfXyhRPDs/GCWN5C8ZnPijUoqeuEQTW0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/b9fJMc/btrfXyhRPDs/GCWN5C8ZnPijUoqeuEQTW0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/b9fJMc/btrfXyhRPDs/GCWN5C8ZnPijUoqeuEQTW0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fb9fJMc%2FbtrfXyhRPDs%2FGCWN5C8ZnPijUoqeuEQTW0%2Fimg.png&quot; data-origin-width=&quot;971&quot; data-origin-height=&quot;288&quot; data-filename=&quot;그림9.png&quot; width=&quot;637&quot; height=&quot;189&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;958&quot; data-origin-height=&quot;184&quot; data-filename=&quot;그림10.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bbfctp/btrf1ieJNTc/3mHRdmOWu2hkreOQOxpQLk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bbfctp/btrf1ieJNTc/3mHRdmOWu2hkreOQOxpQLk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bbfctp/btrf1ieJNTc/3mHRdmOWu2hkreOQOxpQLk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbbfctp%2Fbtrf1ieJNTc%2F3mHRdmOWu2hkreOQOxpQLk%2Fimg.png&quot; data-origin-width=&quot;958&quot; data-origin-height=&quot;184&quot; data-filename=&quot;그림10.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1713&quot; data-origin-height=&quot;1039&quot; data-filename=&quot;그림11.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/o4yb2/btrf3sVEjz2/XUbTjj98w9LBUGKWW4P1Dk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/o4yb2/btrf3sVEjz2/XUbTjj98w9LBUGKWW4P1Dk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/o4yb2/btrf3sVEjz2/XUbTjj98w9LBUGKWW4P1Dk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fo4yb2%2Fbtrf3sVEjz2%2FXUbTjj98w9LBUGKWW4P1Dk%2Fimg.png&quot; data-origin-width=&quot;1713&quot; data-origin-height=&quot;1039&quot; data-filename=&quot;그림11.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&lt;b&gt;지금까지 설명한 내용&lt;/b&gt;을 &lt;b&gt;논문&lt;/b&gt;에서는 &lt;b&gt;아래&lt;/b&gt;와 같이 함축해서 설명하고 있습니다)&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1503&quot; data-origin-height=&quot;834&quot; data-filename=&quot;그림13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/HklLW/btrfZpkThLe/wiPKxdKh3zJstwN6hc6lKk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/HklLW/btrfZpkThLe/wiPKxdKh3zJstwN6hc6lKk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/HklLW/btrfZpkThLe/wiPKxdKh3zJstwN6hc6lKk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FHklLW%2FbtrfZpkThLe%2FwiPKxdKh3zJstwN6hc6lKk%2Fimg.png&quot; data-origin-width=&quot;1503&quot; data-origin-height=&quot;834&quot; data-filename=&quot;그림13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 \(\frac{a}{a+b}\)에서 &lt;b&gt;global optimum&lt;/b&gt;을 갖는걸 파악했습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면, &lt;b&gt;왜&lt;/b&gt; &lt;b&gt;a=b or&lt;/b&gt; \(P_{data}=P_{g}\)라는 &lt;b&gt;조건&lt;/b&gt; 하에 &lt;b&gt;global optimum&lt;/b&gt;을 갖는지,&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;global optimum 값이 &lt;b&gt;왜 1/2이 되는지 수학적&lt;/b&gt;으로 &lt;b&gt;증명&lt;/b&gt;해보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;794&quot; data-origin-height=&quot;64&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cTksQH/btrgbK1Jb58/5kmIs8UHbPpiIvQGKigtXK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cTksQH/btrgbK1Jb58/5kmIs8UHbPpiIvQGKigtXK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cTksQH/btrgbK1Jb58/5kmIs8UHbPpiIvQGKigtXK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcTksQH%2FbtrgbK1Jb58%2F5kmIs8UHbPpiIvQGKigtXK%2Fimg.png&quot; data-origin-width=&quot;794&quot; data-origin-height=&quot;64&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1243&quot; data-origin-height=&quot;745&quot; data-filename=&quot;그림15.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/beszCP/btrf1h1qBTl/5j4DokIOfv7wOgBhKq9ks0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/beszCP/btrf1h1qBTl/5j4DokIOfv7wOgBhKq9ks0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/beszCP/btrf1h1qBTl/5j4DokIOfv7wOgBhKq9ks0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbeszCP%2Fbtrf1h1qBTl%2F5j4DokIOfv7wOgBhKq9ks0%2Fimg.png&quot; data-origin-width=&quot;1243&quot; data-origin-height=&quot;745&quot; data-filename=&quot;그림15.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(지금까지 설명한 내용을 논문에서는 아래와 같이 함축해서 설명하고 있습니다)&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1063&quot; data-origin-height=&quot;1143&quot; data-filename=&quot;그림16.png&quot; width=&quot;751&quot; height=&quot;808&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/ZUBTn/btrf7zUtAsv/dhvNpa0yHQ5KrFDMeNYQBk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/ZUBTn/btrf7zUtAsv/dhvNpa0yHQ5KrFDMeNYQBk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/ZUBTn/btrf7zUtAsv/dhvNpa0yHQ5KrFDMeNYQBk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FZUBTn%2Fbtrf7zUtAsv%2FdhvNpa0yHQ5KrFDMeNYQBk%2Fimg.png&quot; data-origin-width=&quot;1063&quot; data-origin-height=&quot;1143&quot; data-filename=&quot;그림16.png&quot; width=&quot;751&quot; height=&quot;808&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;정리하자면, 앞서 언급했던 질문인&amp;nbsp;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;설계한 확률(딥러닝)모델이 최적해(global optimum)를 갖는가?&quot; &lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;에 대해서&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;&quot;최적해(global optimum)&quot;&lt;/b&gt;을 갖는다고 할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;다시 말해, &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;MinMax 구조의 함수&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에서 \(P_{data}=P_{g}\)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;일 때&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;b&gt;global optimum&lt;/b&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 갖는다고 할 수 있습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&amp;rdquo;&lt;/span&gt;&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[ 4-2. Convergence of Algorithm 1 ]&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서, &lt;b&gt;GAN&lt;/b&gt; 이라는 확률(딥러닝) 모델은 &lt;b&gt;loss&lt;/b&gt;를 통해 살펴 봤듯이 &lt;b&gt;최적해(global optimum)&lt;/b&gt;가 &lt;b&gt;존재&lt;/b&gt;하는걸 볼 수 있었습니다. 딥러닝 모델을 &lt;b&gt;해(solution)를 찾을 수 있는 조건&lt;/b&gt; 중 하나가&lt;b&gt; loss가 convex function인지 확인&lt;/b&gt;하는 것입니다. &lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;그래야 grident descent 와 같은 iterative optimization 방식을 통해 해를 찾을 수 있기 때문입니다.&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;b&gt;iterative optimization과 딥러닝과의 관계&lt;/b&gt;를 설명한 글; &lt;b&gt;&quot;3.Adversarial&amp;nbsp;nets&quot;&amp;nbsp;-&amp;gt;&amp;nbsp;&quot;[3-3-2.&amp;nbsp;Third&amp;nbsp;paragraph&amp;nbsp;&amp;amp;&amp;nbsp;The&amp;nbsp;last&amp;nbsp;sentence]&quot;&lt;/b&gt; 부분을 참고&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;https://89douner.tistory.com/329?category=908620&lt;/a&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632728759208&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;5-1. GAN (Part1. GAN architecture)&quot; data-og-description=&quot;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &amp;quot;Generative Adversarial Nets&amp;quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot; data-og-url=&quot;https://89douner.tistory.com/329&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/ballNu/hyLKjFZ2aM/UKt1OKzbPMmIBJqX03JeJ0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cTZJDH/hyLKsCURuO/SKEdOd1MkOC8jV8Vj7V0Y0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/DIERG/hyLKmCI0LZ/zIxNcZkocbufzRjAxgU111/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/329?category=908620&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/329?category=908620&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/ballNu/hyLKjFZ2aM/UKt1OKzbPMmIBJqX03JeJ0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/cTZJDH/hyLKsCURuO/SKEdOd1MkOC8jV8Vj7V0Y0/img.png?width=800&amp;amp;height=415&amp;amp;face=0_0_800_415,https://scrap.kakaocdn.net/dn/DIERG/hyLKmCI0LZ/zIxNcZkocbufzRjAxgU111/img.png?width=1054&amp;amp;height=539&amp;amp;face=0_0_1054_539');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;5-1. GAN (Part1. GAN architecture)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;안녕하세요. 이번 글에서는 최초의 GAN 논문인 &quot;Generative Adversarial Nets&quot;을 리뷰하려고 합니다. 우선, GAN이라는 모델이 설명할 내용이 많다고 판단하여 파트를 두 개로 나누었습니다. Part1에서는 GAN a&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그럼, 딥러닝 모델 설계 시 고려해야 하는 4가지 질문 중, &lt;b&gt;세 번째 질문&lt;/b&gt;에 대한 답을 설명드려보겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&quot;설계한 확률(딥러닝)모델이 최적해(global optimum)에 수렴하는가?&quot;&lt;/i&gt;&lt;/span&gt;&lt;/b&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위에 있는 질문을 바꿔서 표현하면 &quot;\(p_{g}\)가 \(p_{data}\)로 수렴할 수 있는가?&quot;로 표현할 수 있습니다. 즉, generative model인 &lt;span&gt;\(p_{g}\) 관점에서 loss 함수가 (global optimum으로) convergence(수렴) 할 수 있는지 따져야 하는 것입니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;Generative model인&amp;nbsp;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;span&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;\(p_{g}\) 관점에서 loss 함수가 (global optimum으로) convergence(수렴) 할 수 있는지 따지기 위해서는 '&lt;span style=&quot;color: #ee2323;&quot;&gt;D를 고정시키고, loss 함수가 convex한지 확인하는 것&lt;/span&gt;' 입니다.&quot;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 그림을 잠깐 설명해보겠습니다. D&lt;span style=&quot;color: #000000;&quot;&gt;는 고정했기 때문에 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;\(U(P_{g},D)\) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;를 \(P_{g}\) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;관점에서 보면 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;D&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;는 (고정된) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;상수값&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;입니다.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 바꿔말해,&lt;span style=&quot;color: #000000;&quot;&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;\(P_{g}\)&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;관점에서 보면 &lt;span style=&quot;color: #000000;&quot;&gt;\(U(P_{g},D)\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;의 변수는 &lt;span style=&quot;color: #000000;&quot;&gt;\(P_{g}\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;인 셈인 것입니다.&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;결국&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;span style=&quot;color: #000000;&quot;&gt;\(P_{g}\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 학습함에 따라서 변할 텐데, 변하는&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;span style=&quot;color: #000000;&quot;&gt;\(P_{g}\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에 따라서 &lt;span style=&quot;color: #000000;&quot;&gt;\(U(P_{g},D)\)&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;convex&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;한지 알아봐야 iterative optimization(=ex: gradient descent) 방식으로 global optimum을 찾을 수 있는지 확인 할 수 있습니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1117&quot; data-origin-height=&quot;707&quot; data-filename=&quot;그림17.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/cXxzj8/btrf9CwVA7b/loNmiJe8Uw0TV0lR2t5BZ0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/cXxzj8/btrf9CwVA7b/loNmiJe8Uw0TV0lR2t5BZ0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/cXxzj8/btrf9CwVA7b/loNmiJe8Uw0TV0lR2t5BZ0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FcXxzj8%2Fbtrf9CwVA7b%2FloNmiJe8Uw0TV0lR2t5BZ0%2Fimg.png&quot; data-origin-width=&quot;1117&quot; data-origin-height=&quot;707&quot; data-filename=&quot;그림17.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;위의 설명 &lt;/b&gt;중에서 &quot;어떤 함수를 미분했을 때, 해당 함수가 상수 값을 갖는다면 그 함수는 convex하다고 할 수 있다&quot;는 개념이 있습니다. 예를 들어 보충 설명하면, &lt;span style=&quot;color: #000000;&quot;&gt;2x+1라는 수식에서&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;x&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에 대해서 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;미분하면&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;미분값&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;=2가 나옵니다. 즉, 어떤 수식을 x로 미분 했더니 상수가 나오면 그 수식은 선형함수가 되는 것입니다. 그리고, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;선형함수는 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;convex 라고 할 수 있는데,&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;특정 구간 (ex=&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(0,1))&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 설정해놓으면&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;해당 구간에서 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;minimum &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;값을 찾을 수 있게 됩니다.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;(&amp;darr;&amp;darr;&amp;darr;linear function이 convex function 임을 보여주는 글&amp;darr;&amp;darr;&amp;darr;)&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/221?category=985926&quot;&gt;https://89douner.tistory.com/221?category=985926&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure id=&quot;og_1632741396073&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[Convex function]2-2. Example of Convex function&quot; data-og-description=&quot;※시간이 충분하지 않아 필기로 정리한 내용을 아직 블로그 글로 옮기지 못해 이미지로 공유하는점 양해부탁드립니다. 아래 내용의 키워드는 다음과 같습니다. Exponential function, affine function, powe&quot; data-og-host=&quot;89douner.tistory.com&quot; data-og-source-url=&quot;https://89douner.tistory.com/221?category=985926&quot; data-og-url=&quot;https://89douner.tistory.com/221&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/YbBa0/hyLKhoh8bN/nxAGh5kc6KiOmY9bLkjBi0/img.png?width=793&amp;amp;height=1121&amp;amp;face=0_0_793_1121,https://scrap.kakaocdn.net/dn/vZj76/hyLKlc6jKM/amHtWFUxfRorMKbV0gyq4k/img.png?width=793&amp;amp;height=1121&amp;amp;face=0_0_793_1121&quot;&gt;&lt;a href=&quot;https://89douner.tistory.com/221?category=985926&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://89douner.tistory.com/221?category=985926&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/YbBa0/hyLKhoh8bN/nxAGh5kc6KiOmY9bLkjBi0/img.png?width=793&amp;amp;height=1121&amp;amp;face=0_0_793_1121,https://scrap.kakaocdn.net/dn/vZj76/hyLKlc6jKM/amHtWFUxfRorMKbV0gyq4k/img.png?width=793&amp;amp;height=1121&amp;amp;face=0_0_793_1121');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[Convex function]2-2. Example of Convex function&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;※시간이 충분하지 않아 필기로 정리한 내용을 아직 블로그 글로 옮기지 못해 이미지로 공유하는점 양해부탁드립니다. 아래 내용의 키워드는 다음과 같습니다. Exponential function, affine function, powe&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;89douner.tistory.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1204&quot; data-origin-height=&quot;624&quot; data-filename=&quot;그림19.png&quot; width=&quot;721&quot; height=&quot;374&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/oqMGj/btrgbbFDg1o/lV3dSiKQH3SKCn7XXPY41K/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/oqMGj/btrgbbFDg1o/lV3dSiKQH3SKCn7XXPY41K/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/oqMGj/btrgbbFDg1o/lV3dSiKQH3SKCn7XXPY41K/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FoqMGj%2FbtrgbbFDg1o%2FlV3dSiKQH3SKCn7XXPY41K%2Fimg.png&quot; data-origin-width=&quot;1204&quot; data-origin-height=&quot;624&quot; data-filename=&quot;그림19.png&quot; width=&quot;721&quot; height=&quot;374&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1352&quot; data-origin-height=&quot;733&quot; data-filename=&quot;그림23.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/pBMCq/btrgbKgIfZ6/errcKfyFsgRaSSAlmORKwK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/pBMCq/btrgbKgIfZ6/errcKfyFsgRaSSAlmORKwK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/pBMCq/btrgbKgIfZ6/errcKfyFsgRaSSAlmORKwK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FpBMCq%2FbtrgbKgIfZ6%2FerrcKfyFsgRaSSAlmORKwK%2Fimg.png&quot; data-origin-width=&quot;1352&quot; data-origin-height=&quot;733&quot; data-filename=&quot;그림23.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;▼&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(지금까지 설명한 내용을 논문에서는 아래와 같이 함축해서 설명하고 있음)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1422&quot; data-origin-height=&quot;778&quot; data-filename=&quot;그림24.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/kw4nB/btrfXG1h6RQ/kVbQlOeIZ7eHr4BgjdQ341/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/kw4nB/btrfXG1h6RQ/kVbQlOeIZ7eHr4BgjdQ341/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/kw4nB/btrfXG1h6RQ/kVbQlOeIZ7eHr4BgjdQ341/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fkw4nB%2FbtrfXG1h6RQ%2FkVbQlOeIZ7eHr4BgjdQ341%2Fimg.png&quot; data-origin-width=&quot;1422&quot; data-origin-height=&quot;778&quot; data-filename=&quot;그림24.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;[그 외 위에서 본 수식과 관련된 background or 용어]&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;650&quot; data-origin-height=&quot;25&quot; data-filename=&quot;그림20.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wZRZV/btrf7AsDTo9/0AFRNpNfJczmT8KPaKryL1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wZRZV/btrf7AsDTo9/0AFRNpNfJczmT8KPaKryL1/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wZRZV/btrf7AsDTo9/0AFRNpNfJczmT8KPaKryL1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwZRZV%2Fbtrf7AsDTo9%2F0AFRNpNfJczmT8KPaKryL1%2Fimg.png&quot; data-origin-width=&quot;650&quot; data-origin-height=&quot;25&quot; data-filename=&quot;그림20.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[ 4-2. Algorithm 1 ]&lt;/b&gt;&lt;/h4&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;설계한 확률(딥러닝)모델의 최적해(global optimum)을 찾을 수 있는 알고리즘이 존재하는가?&quot;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 논문에서는 아래와 같은 방식으로 &lt;b&gt;D와 G를 서로 학습&lt;/b&gt;함으로써&lt;b&gt; 최적해(global optimum)&lt;/b&gt;을 찾을 수 있음을 보여줬습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;804&quot; data-origin-height=&quot;520&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/d0bc6Q/btrgdFlF30C/1kEitm2jCLTcKVt5kRBw61/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/d0bc6Q/btrgdFlF30C/1kEitm2jCLTcKVt5kRBw61/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/d0bc6Q/btrgdFlF30C/1kEitm2jCLTcKVt5kRBw61/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fd0bc6Q%2FbtrgdFlF30C%2F1kEitm2jCLTcKVt5kRBw61%2Fimg.png&quot; data-origin-width=&quot;804&quot; data-origin-height=&quot;520&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;위의 알고리즘을 실제로 코딩&lt;/b&gt;할 때 &lt;b&gt;아래와 같이 구현&lt;/b&gt;할 수 있습니다. (D학습시 따로 k step을 두진 않은 것 같네요)&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;(&amp;darr;&amp;darr;&lt;span&gt;&amp;darr;&lt;/span&gt;D, G 관련 loss가 왜 아래와 같이 변하는지는 &lt;a href=&quot;https://89douner.tistory.com/329&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;GAN part1&lt;/a&gt;에서 설명 함&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;&lt;span&gt;&amp;darr;&lt;/span&gt;)&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;487&quot; data-origin-height=&quot;327&quot; data-filename=&quot;그림25.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/B3V8n/btrgbJvmGDG/y6y6fVS9sCHOWq5JljnCw0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/B3V8n/btrgbJvmGDG/y6y6fVS9sCHOWq5JljnCw0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/B3V8n/btrgbJvmGDG/y6y6fVS9sCHOWq5JljnCw0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FB3V8n%2FbtrgbJvmGDG%2Fy6y6fVS9sCHOWq5JljnCw0%2Fimg.png&quot; data-origin-width=&quot;487&quot; data-origin-height=&quot;327&quot; data-filename=&quot;그림25.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[Neural network를 Generative model에 도입할 경우]&lt;/b&gt;&lt;/h4&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;773&quot; data-origin-height=&quot;117&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/yR5jJ/btrga2px9r5/aA5Ky4C25acZkBcKJpdzH0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/yR5jJ/btrga2px9r5/aA5Ky4C25acZkBcKJpdzH0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/yR5jJ/btrga2px9r5/aA5Ky4C25acZkBcKJpdzH0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FyR5jJ%2Fbtrga2px9r5%2FaA5Ky4C25acZkBcKJpdzH0%2Fimg.png&quot; data-origin-width=&quot;773&quot; data-origin-height=&quot;117&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;GAN을 통해 생성하는 데이터의 분포 \(p_{g}\)를 추정하고 최적화 하는 것이 아니라, \(p_{g}\)를 생성하는데 직접적인 영향을 미치는 parameter인 \(\theta_{g}\)를 추정하고 최적화 합니다. 또한, 엄밀히 말하자면 &lt;b&gt;수학적인 증명에서는 generative model, discriminative model이 neural network를 사용한다는 언급이 없습&lt;/b&gt;니다. 그런데, &lt;b&gt;Generative model을 MLP(multilayer perceptron; neural network; deep learning)와 같은 것으로 설정하게 되면&lt;/b&gt; 실제로 &lt;b&gt;최종 loss function 자체가 convex하지 않을 수 있고, 즉 multiple critical points를 가질 수 있습&lt;/b&gt;니다. 하지만, 마법같은 MLP는 이러한 theoretical guarantees가 굳이 보장되지 않고도 나름 생성모델에서 잘 작동한다고 주장하고 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;5. Advantages and disadvantages&lt;/h3&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이 부분(&quot;5. Advantages and disadvantages&quot;)은&lt;b&gt; GAN&lt;/b&gt;의 대표적인 &lt;b&gt;disadvantage&lt;/b&gt;라고 할 수 있는 &lt;b&gt;mode collapse&lt;/b&gt;에 대해서만 설명하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;먼저, &lt;b&gt;mode&lt;/b&gt;라는 &lt;b&gt;용어&lt;/b&gt;에 대해서 이해하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;보통 통계학에서 mean, median, mode, range라는 용어가 사용되는데, 해당 용어들이 의미하는바는 아래와 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;500&quot; data-origin-height=&quot;707&quot; data-filename=&quot;Mean-Median-Mode-and-Range-e1480829559507-1.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bktzUW/btrgbKPCR3y/r2SwaGlE7RvGkyJXzby1lK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bktzUW/btrgbKPCR3y/r2SwaGlE7RvGkyJXzby1lK/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처. https://danielmiessler.com/blog/difference-median-mean/&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bktzUW/btrgbKPCR3y/r2SwaGlE7RvGkyJXzby1lK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbktzUW%2FbtrgbKPCR3y%2Fr2SwaGlE7RvGkyJXzby1lK%2Fimg.png&quot; data-origin-width=&quot;500&quot; data-origin-height=&quot;707&quot; data-filename=&quot;Mean-Median-Mode-and-Range-e1480829559507-1.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처. https://danielmiessler.com/blog/difference-median-mean/&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;b&gt;위의 개념&lt;/b&gt;들을&lt;b&gt; distribution 상에서 설명&lt;/b&gt;하면 아래와 같이 표현할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;754&quot; data-origin-height=&quot;320&quot; data-filename=&quot;0_wHMvuwRa_YF9SFwY.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/lFmCg/btrga2caLHy/WXBu2fgU5MkYATI1FlZT20/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/lFmCg/btrga2caLHy/WXBu2fgU5MkYATI1FlZT20/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림출처. https://medium.com/@nhan.tran/mean-median-an-mode-in-statistics-3359d3774b0b&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/lFmCg/btrga2caLHy/WXBu2fgU5MkYATI1FlZT20/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FlFmCg%2Fbtrga2caLHy%2FWXBu2fgU5MkYATI1FlZT20%2Fimg.png&quot; data-origin-width=&quot;754&quot; data-origin-height=&quot;320&quot; data-filename=&quot;0_wHMvuwRa_YF9SFwY.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림출처. https://medium.com/@nhan.tran/mean-median-an-mode-in-statistics-3359d3774b0b&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;602&quot; data-origin-height=&quot;338&quot; data-filename=&quot;image.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/wT0wx/btrgdHx5JFK/91HeHLEdYdb0BiusYO3LO0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/wT0wx/btrgdHx5JFK/91HeHLEdYdb0BiusYO3LO0/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처. https://velog.io/@tobigs-gm1/basicofgan&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/wT0wx/btrgdHx5JFK/91HeHLEdYdb0BiusYO3LO0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FwT0wx%2FbtrgdHx5JFK%2F91HeHLEdYdb0BiusYO3LO0%2Fimg.png&quot; data-origin-width=&quot;602&quot; data-origin-height=&quot;338&quot; data-filename=&quot;image.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처. https://velog.io/@tobigs-gm1/basicofgan&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;쉽게 말해 mode는 빈도수가 가장 많은 수를 일컫습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;b&gt;&quot;The mode is the most frequent value.&quot;&lt;/b&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그렇다면 이제 mode collpase에 대해 설명해보도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;MinMax Value function을 살펴보면, 학습 시 G(z)가 생성하는 이미지의 종류에 대해서는 고려하지 않습니다. 예를 들어서, G(z)가 MNIST의 1이라는 이미지를 생성하는데 D가 계속해서 판별을 못하고, G(z)가 생성하는 MNIST의 2라는 이미지만 잘 판별해도 되면 결국 value loss 값은 낮아지게 되는 것이죠. 극단적으로 설명해서 MinMax 방식으로 서로 학습하면 D가 만족하는 (판별을 잘 할 수 있는) 이미지(from G(z))만 생성하도록 G가 학습할 것이며, 결국 (서로 좋은쪽으로만) 편향되게 D와 G는 alternative하게 학습하게 될 것 입니다.&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;593&quot; data-origin-height=&quot;44&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/t0gXh/btrgiO3SSqS/49UizYgveFH71pkzXlz9X0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/t0gXh/btrgiO3SSqS/49UizYgveFH71pkzXlz9X0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/t0gXh/btrgiO3SSqS/49UizYgveFH71pkzXlz9X0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Ft0gXh%2FbtrgiO3SSqS%2F49UizYgveFH71pkzXlz9X0%2Fimg.png&quot; data-origin-width=&quot;593&quot; data-origin-height=&quot;44&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;앞서 설명한 것을 다르게 표현하면, G(z)는 MNIST의 1의 distribution은 잘 표현해주지 못하고, 2라는 이미지의 distribution만 잘 표현해도 전체 value function의 loss 값은 낮게 나올 수 있습니다. 이렇게 되면 Generative model G가 특정 숫자만 생성하게 되겠죠.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;2016&quot; data-origin-height=&quot;908&quot; data-filename=&quot;l9sDQK6.png&quot; width=&quot;624&quot; height=&quot;281&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/9vVHT/btrga1RSljj/vK0o6pFjOBJkVGwuyP8NTk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/9vVHT/btrga1RSljj/vK0o6pFjOBJkVGwuyP8NTk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/9vVHT/btrga1RSljj/vK0o6pFjOBJkVGwuyP8NTk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F9vVHT%2Fbtrga1RSljj%2FvK0o6pFjOBJkVGwuyP8NTk%2Fimg.png&quot; data-origin-width=&quot;2016&quot; data-origin-height=&quot;908&quot; data-filename=&quot;l9sDQK6.png&quot; width=&quot;624&quot; height=&quot;281&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;Mode collpase를 정리하자면 아래와 같이 표현할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;i&gt;&lt;span style=&quot;color: #3c4043;&quot;&gt;&quot;Mode collapse happens&amp;nbsp;&lt;/span&gt;when the generator can only produce a single type of output or a small set of outputs&lt;span style=&quot;color: #3c4043;&quot;&gt;. This may happen due to problems in training, such as the generator finds a type of data that is easily able to fool the discriminator and thus keeps generating that one type.&quot;&lt;/span&gt;&lt;/i&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1118&quot; data-origin-height=&quot;712&quot; data-filename=&quot;그림31.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bondQT/btrgbaAUupJ/CEvIA6ubnf9PwIXwSN6tE0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bondQT/btrgbaAUupJ/CEvIA6ubnf9PwIXwSN6tE0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bondQT/btrgbaAUupJ/CEvIA6ubnf9PwIXwSN6tE0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FbondQT%2FbtrgbaAUupJ%2FCEvIA6ubnf9PwIXwSN6tE0%2Fimg.png&quot; data-origin-width=&quot;1118&quot; data-origin-height=&quot;712&quot; data-filename=&quot;그림31.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;위의 그림을 설명하면 다음과 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;먼저 &lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;3&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;개의 검은색 실선&lt;/b&gt;이 실제 이미지를 나타내는 것이라고 볼 수 있습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&amp;nbsp; &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;예를 들어&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;각각&lt;/b&gt;의 &lt;b&gt;실선&lt;/b&gt;들은 &lt;b&gt;금발여성&lt;/b&gt;&lt;/span&gt;&lt;b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;흑발과 안경을 쓰고 있는 남성&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, &lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&lt;b&gt;흑발 여성&lt;/b&gt;을 나타낸다고 해보겠습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;. &lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;b&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&quot;즉 우리가 원하는 목표는 실선 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;3&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;개를 모두 표현해줄 수 있는 확률 분포를 익히는 것이라고 볼 수 있습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&quot;&lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가운데 실선에 해당하는 데이터 셋이 제일 많다고 하면&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, VAE&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;는 제일 데이터 셋이 많은 그룹에 맞게 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;distribution&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 형성하면서 동시에&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;nbsp;normal distribution&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;을 이용해 주변 이미지 그룹들도 다 포괄할 수 있도록 학습을 하기&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 때문에&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 모든 데이터 그룹을 포괄하려는 것을 볼 수 있습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;반대로 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;GAN&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;은 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;generator&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 만드는 이미지 그룹을 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;discriminator&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 더 이상 구별해내지 못할 때 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(real&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; image&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 들어왔을 때 맞출 확률 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;frac12;, fake image&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가 들어왔을 때 맞출 확률 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;&amp;frac12; 이 될 때) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;학습이 종료가 됩니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;. &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;그렇기 때문에 이미지 데이터 셋이 많은 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;가운데 실선에 해당하는&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;그룹만 학습하다 끝내버리면 해당 그룹만 잘 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;generation&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;해주는 현상이 발생 할 수 있습니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&lt;span style=&quot;color: #000000;&quot;&gt;각각의 실선 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이미지 그룹&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;이 우리가 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;representation (By generator)&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;해야 할 중요한 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;mode&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;라고 부르기도 하는데&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;, GAN&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;에서는 이러한 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;mode&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;들을 잘 학습하는 것이 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;안되는&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt; 붕괴&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;(collapsing) &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;현상이 일어난다고 하여 &lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;mode collapsing&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;의 단점을 갖고 있다고도 합니다&lt;/span&gt;&lt;span style=&quot;color: #000000;&quot;&gt;.&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;그 외 mode collpase에 대한 설명이 필요하시면 아래 링크를 참고해주시면 좋을 것 같습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632813381583&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;초짜 대학원생의 입장에서 이해하는 Unrolled Generative Adversarial Networks (1)&quot; data-og-description=&quot;Easy explanation for unrolled generative adversarial network (쉽게 풀어 설명하는 Unrolled GAN)&quot; data-og-host=&quot;jaejunyoo.blogspot.com&quot; data-og-source-url=&quot;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&quot; data-og-url=&quot;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/bjngPD/hyLLLBUY3z/6eXZSLIpy2af14A3UrX5G0/img.png?width=391&amp;amp;height=205&amp;amp;face=0_0_391_205&quot;&gt;&lt;a href=&quot;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;http://jaejunyoo.blogspot.com/2017/02/unrolled-generative-adversarial-network-1.html&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/bjngPD/hyLLLBUY3z/6eXZSLIpy2af14A3UrX5G0/img.png?width=391&amp;amp;height=205&amp;amp;face=0_0_391_205');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;초짜 대학원생의 입장에서 이해하는 Unrolled Generative Adversarial Networks (1)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Easy explanation for unrolled generative adversarial network (쉽게 풀어 설명하는 Unrolled GAN)&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;jaejunyoo.blogspot.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;a href=&quot;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632813566889&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;Measuring Mode Collapse in GANs&quot; data-og-description=&quot;Evaluate and quantitatively measure the GAN failure case of mode collapse - when the model fails to generate diverse enough outputs.&quot; data-og-host=&quot;wandb.ai&quot; data-og-source-url=&quot;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&quot; data-og-url=&quot;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/cc1vz7/hyLLBMPkAD/08FMryutb3pf7OFiYrd2wK/img.png?width=300&amp;amp;height=300&amp;amp;face=32_32_264_294&quot;&gt;&lt;a href=&quot;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://wandb.ai/authors/DCGAN-ndb-test/reports/Measuring-Mode-Collapse-in-GANs--VmlldzoxNzg5MDk&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/cc1vz7/hyLLBMPkAD/08FMryutb3pf7OFiYrd2wK/img.png?width=300&amp;amp;height=300&amp;amp;face=32_32_264_294');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;Measuring Mode Collapse in GANs&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;Evaluate and quantitatively measure the GAN failure case of mode collapse - when the model fails to generate diverse enough outputs.&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;wandb.ai&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h3 data-ke-size=&quot;size23&quot;&gt;6. Experiments&lt;/h3&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;774&quot; data-origin-height=&quot;181&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/c9XHxa/btrgbaNCpVa/PEtubuaoyR8L71aCdbQ080/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/c9XHxa/btrgbaNCpVa/PEtubuaoyR8L71aCdbQ080/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/c9XHxa/btrgbaNCpVa/PEtubuaoyR8L71aCdbQ080/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fc9XHxa%2FbtrgbaNCpVa%2FPEtubuaoyR8L71aCdbQ080%2Fimg.png&quot; data-origin-width=&quot;774&quot; data-origin-height=&quot;181&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;사용된 데이터셋&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;MNIST&lt;/li&gt;
&lt;li&gt;Toronto Face Database (TFD)&lt;/li&gt;
&lt;li&gt;CIFAR-10&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Activation function&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;Generator nets: a mixture of rectifier activations and sigmoid activations&lt;/li&gt;
&lt;li&gt;Discriminator net: maxout activations&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;Drop out&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;discriminator net 학습시에만 drop out 적용&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;b&gt;&amp;nbsp;Noise&lt;/b&gt;&lt;/p&gt;
&lt;ul style=&quot;list-style-type: disc;&quot; data-ke-list-type=&quot;disc&quot;&gt;
&lt;li&gt;이론적으로는 intermediate layers of generator에 주겠금 되어있음&lt;/li&gt;
&lt;li&gt;하지만, 실제 구현 상에서는 &quot;the bottommost layer of the generator network&quot;의 input에 해당하는 data에 noise를 주어 학습 시킴&lt;/li&gt;
&lt;/ul&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p style=&quot;text-align: center;&quot; data-ke-size=&quot;size18&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;&quot;&lt;span style=&quot;color: #282829;&quot;&gt;Conventionally people usually draw neural networks from left to right or from bottom-up (input to output). So by &amp;ldquo;top layer&amp;rdquo; it&amp;rsquo;s more likely to be the last (output) layer.&quot;&lt;/span&gt;&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;1580&quot; data-origin-height=&quot;1092&quot; data-filename=&quot;스크린샷 2020-02-26 오후 9.23.13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dLxrRw/btrgbArMKXQ/NmWeRHKF0lrVl30XsGGRT1/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dLxrRw/btrgbArMKXQ/NmWeRHKF0lrVl30XsGGRT1/img.png&quot; data-alt=&quot;&amp;amp;amp;lt;그림 출처: https://velog.io/@hwany/GAN&amp;amp;amp;gt;&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dLxrRw/btrgbArMKXQ/NmWeRHKF0lrVl30XsGGRT1/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdLxrRw%2FbtrgbArMKXQ%2FNmWeRHKF0lrVl30XsGGRT1%2Fimg.png&quot; data-origin-width=&quot;1580&quot; data-origin-height=&quot;1092&quot; data-filename=&quot;스크린샷 2020-02-26 오후 9.23.13.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;figcaption&gt;&amp;lt;그림 출처: https://velog.io/@hwany/GAN&amp;gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 그림 처럼 논문에서는 1과 5 사이의 vector linear interpolation, 7과 1사이의 vector linear interpolation 했을 때의 결과를 보여줍니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;710&quot; data-origin-height=&quot;96&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/bx1Xtj/btrf9CxTa7C/SvSAPHPd5uZd9zUsJ47KrK/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/bx1Xtj/btrf9CxTa7C/SvSAPHPd5uZd9zUsJ47KrK/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/bx1Xtj/btrf9CxTa7C/SvSAPHPd5uZd9zUsJ47KrK/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2Fbx1Xtj%2Fbtrf9CxTa7C%2FSvSAPHPd5uZd9zUsJ47KrK%2Fimg.png&quot; data-origin-width=&quot;710&quot; data-origin-height=&quot;96&quot; data-filename=&quot;제목 없음.png&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;아래 수식을 이용하면 실제 이미지 데이터에 대응하는 z 값을 찾을 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;367&quot; data-origin-height=&quot;75&quot; data-filename=&quot;그림28.png&quot; width=&quot;264&quot; height=&quot;54&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/dUQINb/btrgblJc42P/yzH3jKj986q7ZiR5j86Xsk/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/dUQINb/btrgblJc42P/yzH3jKj986q7ZiR5j86Xsk/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/dUQINb/btrgblJc42P/yzH3jKj986q7ZiR5j86Xsk/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2FdUQINb%2FbtrgblJc42P%2FyzH3jKj986q7ZiR5j86Xsk%2Fimg.png&quot; data-origin-width=&quot;367&quot; data-origin-height=&quot;75&quot; data-filename=&quot;그림28.png&quot; width=&quot;264&quot; height=&quot;54&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;이렇게 찾은 z값들 간에 vector linear interpolation을 하게 되면, 7과 1사이의 z 중간 값이 9가 나오는걸 확인할 수 있습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;figure class=&quot;imageblock alignCenter&quot; data-origin-width=&quot;566&quot; data-origin-height=&quot;497&quot; data-filename=&quot;그림29.png&quot; width=&quot;415&quot; height=&quot;364&quot; data-ke-mobilestyle=&quot;widthOrigin&quot;&gt;&lt;span data-url=&quot;https://blog.kakaocdn.net/dn/70MRd/btrga1Yx39B/JhEueKUo4v0MOlL0EqOku0/img.png&quot; data-phocus=&quot;https://blog.kakaocdn.net/dn/70MRd/btrga1Yx39B/JhEueKUo4v0MOlL0EqOku0/img.png&quot;&gt;&lt;img src=&quot;https://blog.kakaocdn.net/dn/70MRd/btrga1Yx39B/JhEueKUo4v0MOlL0EqOku0/img.png&quot; srcset=&quot;https://img1.daumcdn.net/thumb/R1280x0/?scode=mtistory2&amp;fname=https%3A%2F%2Fblog.kakaocdn.net%2Fdn%2F70MRd%2Fbtrga1Yx39B%2FJhEueKUo4v0MOlL0EqOku0%2Fimg.png&quot; data-origin-width=&quot;566&quot; data-origin-height=&quot;497&quot; data-filename=&quot;그림29.png&quot; width=&quot;415&quot; height=&quot;364&quot; data-ke-mobilestyle=&quot;widthOrigin&quot; onerror=&quot;this.onerror=null; this.src='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png'; this.srcset='//t1.daumcdn.net/tistory_admin/static/images/no-image-v1.png';&quot;/&gt;&lt;/span&gt;&lt;/figure&gt;
&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;실제 논문에서는 결과에 대한 여러 해석들이 있는데, 요즘에 사용하고 있는 GAN 평가지표와는 다른 부분이 있어 생략하도록 하겠습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;지금까지 GAN 논문에서 설명하고 있는 이론적 배경에 대해 다루어봤습니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&lt;span style=&quot;font-family: 'Noto Sans Light';&quot;&gt;감사합니다.&lt;/span&gt;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size18&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;h4 data-ke-size=&quot;size20&quot;&gt;&lt;b&gt;[Reference]&lt;/b&gt;&lt;/h4&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;1. Yunjey Choi, NAVER AI Lab&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=odpjk7_tGY0&quot;&gt;https://www.youtube.com/watch?v=odpjk7_tGY0&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=odpjk7_tGY0&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/bPjKWt/hyLKlbYlpj/VL7V0mk678QDEtb6rkcMh0/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/odpjk7_tGY0&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;2. Yoo Jaejun, UNIST&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=kLDuxRtxGD8&quot;&gt;https://www.youtube.com/watch?v=kLDuxRtxGD8&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;figure data-ke-type=&quot;video&quot; data-ke-style=&quot;alignCenter&quot; data-video-host=&quot;youtube&quot; data-video-url=&quot;https://www.youtube.com/watch?v=kLDuxRtxGD8&quot; data-video-thumbnail=&quot;https://scrap.kakaocdn.net/dn/buSF9u/hyLKrDgs33/erfc8ZXt3KnAltgS3JoBpk/img.jpg?width=1280&amp;amp;height=720&amp;amp;face=0_0_1280_720&quot; data-video-width=&quot;860&quot; data-video-height=&quot;484&quot; data-video-origin-width=&quot;860&quot; data-video-origin-height=&quot;484&quot; data-ke-mobilestyle=&quot;widthContent&quot;&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/kLDuxRtxGD8&quot; width=&quot;860&quot; height=&quot;484&quot; frameborder=&quot;&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/iframe&gt;
&lt;figcaption&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632648962506&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;website&quot; data-og-title=&quot;초짜 대학원생 입장에서 이해하는 Generative Adversarial Nets (2)&quot; data-og-description=&quot;쉽게 풀어 설명하는 Generative Adversarial Nets (GAN)&quot; data-og-host=&quot;jaejunyoo.blogspot.com&quot; data-og-source-url=&quot;https://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&quot; data-og-url=&quot;http://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&quot; data-og-image=&quot;&quot;&gt;&lt;a href=&quot;https://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://jaejunyoo.blogspot.com/2017/01/generative-adversarial-nets-2.html&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url();&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;초짜 대학원생 입장에서 이해하는 Generative Adversarial Nets (2)&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;쉽게 풀어 설명하는 Generative Adversarial Nets (GAN)&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;jaejunyoo.blogspot.com&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&lt;a href=&quot;https://brunch.co.kr/@kakao-it/145&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;https://brunch.co.kr/@kakao-it/145&lt;/a&gt;&lt;/p&gt;
&lt;figure id=&quot;og_1632738579519&quot; contenteditable=&quot;false&quot; data-ke-type=&quot;opengraph&quot; data-ke-align=&quot;alignCenter&quot; data-og-type=&quot;article&quot; data-og-title=&quot;[카카오AI리포트]Do you know GAN? 1/2&quot; data-og-description=&quot;유재준 | 카이스트 | 최근 딥러닝 분야에서 가장 뜨겁게 연구되고 있는 주제인&amp;nbsp;GAN(generative adversarial network)을 소개하고 학습할 수 있는글을 연재하려고 한다. GAN은 이름 그대로 뉴럴 네트워크(neur&quot; data-og-host=&quot;brunch.co.kr&quot; data-og-source-url=&quot;https://brunch.co.kr/@kakao-it/145&quot; data-og-url=&quot;https://brunch.co.kr/@kakao-it/145&quot; data-og-image=&quot;https://scrap.kakaocdn.net/dn/u5EK6/hyLKgizgwb/xKPY1ftP56u9xrNrGwWZSk/img.jpg?width=471&amp;amp;height=295&amp;amp;face=34_39_451_284,https://scrap.kakaocdn.net/dn/mJM9z/hyLLCddM6n/M24ga6CYNtLsgYrMjrWbuK/img.jpg?width=500&amp;amp;height=500&amp;amp;face=8_68_434_484,https://scrap.kakaocdn.net/dn/BSF86/hyLLNFOLtq/wsViQnjQZOZrFkjxfc9bPK/img.png?width=1218&amp;amp;height=556&amp;amp;face=471_247_718_517&quot;&gt;&lt;a href=&quot;https://brunch.co.kr/@kakao-it/145&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot; data-source-url=&quot;https://brunch.co.kr/@kakao-it/145&quot;&gt;
&lt;div class=&quot;og-image&quot; style=&quot;background-image: url('https://scrap.kakaocdn.net/dn/u5EK6/hyLKgizgwb/xKPY1ftP56u9xrNrGwWZSk/img.jpg?width=471&amp;amp;height=295&amp;amp;face=34_39_451_284,https://scrap.kakaocdn.net/dn/mJM9z/hyLLCddM6n/M24ga6CYNtLsgYrMjrWbuK/img.jpg?width=500&amp;amp;height=500&amp;amp;face=8_68_434_484,https://scrap.kakaocdn.net/dn/BSF86/hyLLNFOLtq/wsViQnjQZOZrFkjxfc9bPK/img.png?width=1218&amp;amp;height=556&amp;amp;face=471_247_718_517');&quot;&gt;&amp;nbsp;&lt;/div&gt;
&lt;div class=&quot;og-text&quot;&gt;
&lt;p class=&quot;og-title&quot; data-ke-size=&quot;size16&quot;&gt;[카카오AI리포트]Do you know GAN? 1/2&lt;/p&gt;
&lt;p class=&quot;og-desc&quot; data-ke-size=&quot;size16&quot;&gt;유재준 | 카이스트 | 최근 딥러닝 분야에서 가장 뜨겁게 연구되고 있는 주제인&amp;nbsp;GAN(generative adversarial network)을 소개하고 학습할 수 있는글을 연재하려고 한다. GAN은 이름 그대로 뉴럴 네트워크(neur&lt;/p&gt;
&lt;p class=&quot;og-host&quot; data-ke-size=&quot;size16&quot;&gt;brunch.co.kr&lt;/p&gt;
&lt;/div&gt;
&lt;/a&gt;&lt;/figure&gt;
&lt;p data-ke-size=&quot;size16&quot;&gt;&amp;nbsp;&lt;/p&gt;</description>
      <category>Deep Learning for Computer Vision/Generative Adversarial Networks (GAN)</category>
      <author>Do-Woo-Ner</author>
      <guid isPermaLink="true">https://89douner.tistory.com/331</guid>
      <comments>https://89douner.tistory.com/331#entry331comment</comments>
      <pubDate>Thu, 23 Sep 2021 09:28:12 +0900</pubDate>
    </item>
  </channel>
</rss>