{"id":25440,"date":"2025-07-12T10:29:21","date_gmt":"2025-07-12T08:29:21","guid":{"rendered":"https:\/\/www.wjst.de\/blog\/?p=25440"},"modified":"2025-07-13T07:20:09","modified_gmt":"2025-07-13T05:20:09","slug":"attention-is-all-you-need","status":"publish","type":"post","link":"https:\/\/www.wjst.de\/blog\/sciencesurf\/2025\/07\/attention-is-all-you-need\/","title":{"rendered":"Attention is all you need"},"content":{"rendered":"<p>Here is the link to the famous landmark paper in the recent history <a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">https:\/\/arxiv.org\/abs\/1706.03762<\/a><\/p>\n<p><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/bCz4OMemCcA?si=L4yJGcf6PYe04k1H\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p>Before this paper, most sequence modeling (e.g., for language) used recurrent neural networks (RNNs) or convolutional neural networks (CNNs). These had significant limitations, such as a difficulty with long-range dependencies and slow training due to sequential processing. The Transformer replaced recurrence with self-attention, enabling parallelization and faster training, while better capturing dependencies in data. So the transformer architecture became the foundation for nearly all state-of-the-art NLP models. This enabled training models with billions of parameters, which is key to achieving high performance in AI tasks.<\/p>\n\n<p>&nbsp;<\/p>\n<div class=\"bottom-note\">\n  <span class=\"mod1\">CC-BY-NC Science Surf , accessed 22.04.2026<\/span>\n <\/div>","protected":false},"excerpt":{"rendered":"<p>Here is the link to the famous landmark paper in the recent history https:\/\/arxiv.org\/abs\/1706.03762 &nbsp; Before this paper, most sequence modeling (e.g., for language) used recurrent neural networks (RNNs) or convolutional neural networks (CNNs). These had significant limitations, such as a difficulty with long-range dependencies and slow training due to sequential processing. 
The Transformer replaced &hellip; <a href=\"https:\/\/www.wjst.de\/blog\/sciencesurf\/2025\/07\/attention-is-all-you-need\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Attention is all you need<\/span> <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[5060,5061],"class_list":["post-25440","post","type-post","status-publish","format-standard","hentry","category-computer-software","tag-gpt","tag-transformer"],"_links":{"self":[{"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/posts\/25440","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/comments?post=25440"}],"version-history":[{"count":3,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/posts\/25440\/revisions"}],"predecessor-version":[{"id":25447,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/posts\/25440\/revisions\/25447"}],"wp:attachment":[{"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/media?parent=25440"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/categories?post=25440"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.wjst.de\/blog\/wp-json\/wp\/v2\/tags?post=25440"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}