<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>All About AI - SK hynix Newsroom</title>
	<atom:link href="https://skhynix-news-global-stg.mock.pe.kr/tag/all-about-ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<description></description>
	<lastBuildDate>Mon, 10 Feb 2025 12:34:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>

<image>
	<url>https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2019/10/29044430/152x152-100x100.png</url>
	<title>All About AI - SK hynix Newsroom</title>
	<link>https://skhynix-news-global-stg.mock.pe.kr</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>[All About AI] A Guide to Machine Learning Fundamentals</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-a-guide-to-machine-learning-fundamentals/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Wed, 11 Dec 2024 06:00:06 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[All About AI]]></category>
		<category><![CDATA[reinforcement learning]]></category>
		<category><![CDATA[supervised learning]]></category>
		<category><![CDATA[unsupervised learning]]></category>
		<guid isPermaLink="false">https://skhynix-news-global-stg.mock.pe.kr/?p=16829</guid>

					<description><![CDATA[<p>AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This second episode covers an overview of one of the most significant branches of AI, machine learning. &#160; What is Machine Learning? As introduced [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-a-guide-to-machine-learning-fundamentals/">[All About AI] A Guide to Machine Learning Fundamentals</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<div style="border: none; background: #D9D9D9; height: auto; padding: 10px 20px; margin-bottom: 10px; color: #000;"><span style="color: #000000; font-size: 18px;">AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This second episode covers an overview of one of the most significant branches of AI, machine learning.</span></div>
<p>&nbsp;</p>
<h3 class="tit">What is Machine Learning?</h3>
<p>As introduced in the <span style="text-decoration: underline;"><a href="https://news.skhynix.com/all-about-ai-the-origins-evolution-future-of-ai/">first episode</a></span> of the All About AI series, machine learning is an algorithm that autonomously learns patterns from data to make predictions. It has become the dominant methodology in AI, especially with the explosive growth of data in recent years. In contrast, traditional AI models required humans to explicitly program rules and logic. While these models worked well for problems with clear, defined rules, such as simple board games, they had limitations when dealing with complex rules and datasets. For example, consider the task of developing an AI image recognition application to identify cats. This would involve steps such as processing countless pixels, interpreting color data, and establishing rules to distinguish a cat’s patterns, highlighting the challenges of such projects.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15944 size-full" title="An overview of AI’s evolution through the decades from the 1950s to the 2020s" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10122800/Machine-learning-models-can-distinguish-dogs-from-cats-by-learning-from-large-datasets-of-labeled-images.png" alt="Machine learning models can distinguish dogs from cats by learning from large datasets of labeled images" width="1000" height="563" /></p>
<p class="source" style="text-align: center;">Figure 1. Machine learning models can distinguish dogs from cats by learning from large datasets of labeled images</p>
<p>&nbsp;</p>
<p>Machine learning involves training a system to discover complex structures and patterns within data, enabling it to learn these patterns independently and make predictions based on new data. As an example, imagine that the aforementioned AI image recognition software for detecting cats is created using machine learning. In this process, various photos are collected to train the algorithm, allowing it to learn how to identify cats on its own.</p>
<h3 class="tit">Types and Characteristics of Machine Learning Algorithms</h3>
<p>Machine learning algorithms extract data from underlying probability distributions<sup>1</sup> in the real world to train models for tackling specific problems. These algorithms can be broadly categorized into three types, each with distinct characteristics and applications based on the type of problem.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Probability distribution</strong>: A mathematical model that describes how data is distributed, enabling the identification of patterns and structures within the data. This signals the probability that various outcomes of a given system will occur under different conditions.</p>
<p><strong>1) Supervised learning: </strong>This method involves training a model using inputs paired with correct outputs, or labels, and then analyzing these training pairs to predict outcomes for new data. As an example, imagine developing an AI application to predict a person’s gender from a photo. Here, the photo is the input, and the gender is the label. The model learns the characteristics that distinguish gender and applies this knowledge to predict the gender of individuals in new photos. Supervised learning can be further divided into two types depending on the nature of the labels:</p>
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="margin-bottom: 20px;"><strong>Classification:</strong> This algorithm assigns data to specific categories using labels. For example, determining whether a photo contains a dog or distinguishing letters in handwritten text are classification tasks. In these scenarios, the data is assigned to a specific category and labeled accordingly.</li>
<li style="margin-bottom: 20px;"><strong>Regression:</strong> This technique uses labels for continuous values. For example, predicting house prices based on factors such as size and location, or forecasting temperatures from current weather data, are typical regression problems. In these cases, the goal is to predict numerical values as accurately as possible.</li>
</ul>
</li>
</ul>
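<p>For readers who want to see both supervised tasks in practice, the following is a minimal, illustrative sketch. It assumes the open-source scikit-learn library and its bundled sample datasets, which are not referenced in the article itself, so treat it as a rough sketch rather than a recommended recipe.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Minimal sketch: classification (categorical label) vs. regression (continuous label)
from sklearn.datasets import load_iris, make_regression
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split

# Classification: assign each flower to one of three species
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: predict a continuous numeric target from numeric features
X, y = make_regression(n_samples=200, n_features=3, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("regression R^2 score:", reg.score(X_test, y_test))
</code></pre>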
<p><strong>2) Unsupervised learning: </strong>As the name suggests, unsupervised learning differs from supervised learning in that it learns from data without explicit “supervision”, meaning there are no labels provided. Instead, unsupervised learning focuses on understanding and learning the characteristics of the data’s probability distribution. Key techniques include:</p>
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="margin-bottom: 20px;"><strong>Clustering:</strong> This method groups data with similar characteristics to identify hidden patterns in probability distributions. For example, in a real-world semiconductor manufacturing process, applying a clustering algorithm to images of defective wafers<sup>2</sup> revealed that the defects could be categorized into several types based on their underlying causes.</li>
<li style="margin-bottom: 20px;"><strong>Dimensionality reduction:</strong> This technique simplifies high-dimensional data—datasets with numerous features or variables—while retaining the most important information. Thus, it aids in data analysis and visualization. A common example is principal component analysis (PCA), which removes unnecessary information such as noise—irrelevant or random variations.</li>
</ul>
</li>
</ul>
</li>
</ul>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>2</sup><strong>Wafer</strong>: Generally made from silicon, wafers act as the substrate, or foundation, for semiconductor devices.</p>
<p>Generative AI, which has gained significant traction recently, generally fits into the unsupervised learning category as it learns probability distributions from data to generate new data. For example, ChatGPT learns the probability distribution of natural language to predict the next word in a given text. However, since generative AI often uses supervised learning techniques during training, there is debate about whether it should be considered purely unsupervised learning.</p>
<p><strong>3) Reinforcement learning: </strong>This type of learning focuses on training a model to maximize rewards through interaction with its environment. It is particularly effective for tasks requiring sequential decision-making. Thus, it is widely used in robotics to help robots find optimal paths while avoiding obstacles, as well as in autonomous driving and AI gaming. Recently, reinforcement learning from human feedback (RLHF)<sup>3</sup> has garnered significant attention for fine-tuning large language models (LLMs)<sup>4</sup> such as ChatGPT, enabling them to generate responses that better align with human preferences.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup><strong>Reinforcement learning from human feedback (RLHF)</strong>: A method in which a model learns from “rewards” given based on human feedback. By incorporating human evaluations of its outputs, the model adjusts its behavior to generate responses that better match human preferences.<br />
<sup>4</sup><strong>Large language models (LLM)</strong>: Advanced AI systems trained on vast amounts of text data to understand and generate human-like text based on the context they are given.</p>
<p style="text-align: center;"><iframe loading="lazy" src="https://www.youtube.com/embed/TmPfTpjtdgg?si=InDw-C9CAMu2LmaO" width="810" height="455" frameborder="0" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></p>
<p class="source" style="text-align: center;">Figure 2. A video clip showing an AI playing a brick-breaking game. In this classic example of reinforcement learning, the AI is given the instruction to break more bricks to improve its score and learns to do so on its own.</p>
<p>&nbsp;</p>
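<p>To make this reward-driven learning loop more concrete, here is a toy sketch of tabular Q-learning, one simple reinforcement learning algorithm chosen only for illustration; it is not how the game-playing agent in the video is built. An agent on a five-cell corridor receives a reward only at the rightmost cell and gradually learns to move right.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Toy reinforcement learning sketch: tabular Q-learning on a 5-cell corridor.
# The agent earns a reward of 1 only when it reaches the rightmost cell.
import random

n_states = 5
actions = [-1, +1]                           # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]    # Q[state][action index]
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != n_states - 1:
        if random.random() >= epsilon:       # exploit the best known action
            a = max(range(2), key=lambda i: Q[state][i])
        else:                                # occasionally explore at random
            a = random.randrange(2)
        nxt = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        # Move the estimate toward "reward + discounted best future value"
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(n_states)]
print("learned policy (0 = left, 1 = right):", policy)
</code></pre>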
<h3 class="tit">Evaluating Machine Learning Performance</h3>
<p>The ultimate goal of machine learning is to perform effectively with new, unseen data in real-world situations. In other words, it is important for the model to have generalization capabilities. To this end, it is essential to accurately evaluate and verify the model’s performance, which typically involves the following steps.</p>
<p><strong>1) Choosing Performance Metrics </strong><br />
The choice of performance metrics depends on the type of problem being addressed. In classification problems, common performance metrics include:</p>
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="margin-bottom: 20px;"><strong>Accuracy:</strong> Refers to the proportion of correct predictions. For example, if a medical test correctly diagnoses 95 out of 100 cases, the accuracy rate is 95%. However, a meaningful accuracy assessment requires a balanced dataset. If 95 out of 100 samples are negative and only five are positive, a model predicting all samples as negative would still achieve 95% accuracy. This high level of accuracy can be misleading, as the model would fail to identify any positive samples.</li>
<li style="margin-bottom: 20px;"><strong>Precision and recall:</strong> Precision measures the proportion of actual positives among predicted positives, while recall measures the proportion of correctly predicted positives among actual positives. These metrics have a trade-off relationship, so it is essential to optimize the model by considering the balance and objectives between precision and recall. For example, in medical testing, increasing recall is considered crucial to detect as many cases of a condition as possible, while precision might be more important for tasks such as spam mail filtering. To address this trade-off problem, the F1 score<sup>5</sup> is used to evaluate the balance between precision and recall.</li>
</ul>
</li>
</ul>
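<p>The pitfall described above is easy to reproduce. The sketch below, which assumes the scikit-learn metrics module purely for illustration, scores a model that labels every one of 100 samples (95 negative, 5 positive) as negative: accuracy looks excellent, while precision, recall, and the F1 score reveal that it never finds a positive case.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Why accuracy alone can mislead on an imbalanced dataset
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0] * 95 + [1] * 5   # 95 negative samples, 5 positive samples
y_pred = [0] * 100            # a model that predicts "negative" for everything

print("accuracy :", accuracy_score(y_true, y_pred))                    # 0.95
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall   :", recall_score(y_true, y_pred, zero_division=0))     # 0.0
print("f1 score :", f1_score(y_true, y_pred, zero_division=0))         # 0.0
</code></pre>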
<p>For regression problems, performance is typically assessed using metrics such as mean squared error (MSE)<sup>6</sup> , root mean squared error (RMSE)<sup>7</sup> , and mean absolute error (MAE)<sup>8</sup>.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>5</sup><strong>F1 score</strong>: The harmonic mean of precision and recall, useful for evaluating a model’s classification performance when class imbalance exists. It ranges from 0 to 1, with higher values indicating better performance.<br />
<sup>6</sup><strong>Mean squared error (MSE)</strong>: The average of the squared differences between predicted and actual values.<br />
<sup>7</sup><strong>Root mean squared error (RMSE)</strong>: The square root of the MSE, providing error measurement in the same units as the actual values.<br />
<sup>8</sup><strong>Mean absolute error (MAE)</strong>: The average of the absolute differences between predicted and actual values.</p>
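<p>The regression metrics defined above are simple enough to compute directly. Below is a small sketch using NumPy, an assumed choice of library, with made-up prediction values for illustration only.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># MSE, RMSE, and MAE computed from scratch
import numpy as np

y_true = np.array([200.0, 310.0, 150.0, 420.0])   # e.g. actual house prices
y_pred = np.array([210.0, 300.0, 170.0, 400.0])   # model predictions

mse = np.mean((y_true - y_pred) ** 2)      # mean squared error
rmse = np.sqrt(mse)                        # same units as the target values
mae = np.mean(np.abs(y_true - y_pred))     # mean absolute error
print("MSE:", mse, "RMSE:", rmse, "MAE:", mae)
</code></pre>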
<p><strong>2) Performance Evaluation Methods</strong></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15947 size-full" title="Generative AI explained through key AI subsets" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10122750/A-visual-representation-of-the-train-test-split-method-and-cross-validation-techniques-used-in-machine-learning.png" alt="A visual representation of the train-test split method and cross-validation techniques used in machine learning" width="1000" height="563" /></p>
<p class="source" style="text-align: center;">Figure 3. A visual representation of the train-test split method and cross-validation techniques used in machine learning</p>
<p>&nbsp;</p>
<p>To evaluate machine learning models, data is usually split into training and testing sets. This helps assess how well the model generalizes to new data.</p>
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="margin-bottom: 20px;"><strong>Train-test split:</strong> As one of the simplest methods, it involves dividing data into a training set and a test set. The model is trained on the training set, and then its predictive performance is evaluated on the test set to gauge generalization performance. Typically, 70–80% of the entire data is used for training, with the remainder reserved for testing.</li>
<li style="margin-bottom: 20px;"><strong>Cross-validation:</strong> This model divides the data into so-called K folds, in which K refers to the number of data subsets, or folds. One fold is used for testing, while the other K-1 folds are used to train the model. This process is repeated K times, and the average performance is calculated. Although cross-validation is common in traditional machine learning, it is time-intensive, making the train-test split the preferred method in deep learning.</li>
</ul>
</li>
</ul>
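<p>The sketch below runs both evaluation methods side by side on a small sample dataset; scikit-learn and its bundled iris data are assumptions made only for the illustration.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Train-test split vs. K-fold cross-validation
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Train-test split: hold out 20% of the data for testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# K-fold cross-validation: train on K-1 folds, test on the remaining fold, repeat K times
scores = cross_val_score(model, X, y, cv=5)
print("5-fold scores:", scores, "mean:", scores.mean())
</code></pre>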
<p><strong>3) Understanding Model Performance</strong><br />
The results from these evaluation methods provide critical feedback for improving model performance. However, two common phenomena often arise:</p>
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="list-style-type: none;">
<ul style="color: #000; font-size: 18px; line-height: 1.8;">
<li style="margin-bottom: 20px;"><strong>Underfitting:</strong> This occurs when a model is too simple to learn the underlying patterns in the data, resulting in poor performance on both training and testing sets. For example, consider a regression problem where the actual data follows a quadratic function but the predictive model is set as a linear function. The model may in this case be unable to identify patterns, meaning it has low expressivity as it cannot express complex functions, possibly leading to underfitting.</li>
<li style="margin-bottom: 20px;"><strong>Overfitting:</strong> This occurs when a model is too complex and learns both the data’s patterns and its noise, performing well on training data but poorly on test or new data. To address overfitting and better assess generalization, techniques like cross-validation can be used. Evaluating the model’s performance across various data splits allows for a more accurate assessment of overfitting and helps in selecting the appropriate level of model complexity.</li>
</ul>
</li>
</ul>
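<p>Both phenomena can be demonstrated on synthetic data drawn from a quadratic function, as in the example above. In the rough sketch below (scikit-learn assumed), a degree-1 model underfits, a degree-2 model fits well, and a degree-15 model typically overfits, scoring high on training data but worse on the test set.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Underfitting vs. overfitting on data generated from a quadratic function
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = 0.5 * X[:, 0] ** 2 + rng.normal(scale=0.5, size=60)   # quadratic signal plus noise
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 2, 15):   # too simple, about right, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    print(f"degree {degree}: train R^2 = {model.score(X_train, y_train):.2f}, "
          f"test R^2 = {model.score(X_test, y_test):.2f}")
</code></pre>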
<p>To build a model with strong generalization ability, it is widely recognized that a balance must be reached between underfitting and overfitting through methods such as regularization<sup>9</sup>. Interestingly, recent research in deep learning has revealed a double descent<sup>10</sup> phenomenon, where increasing the model size after initial overfitting does not actually worsen overfitting but can improve generalization performance. This discovery has sparked significant research interest in these areas.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>9</sup><strong>Regularization</strong>: A method to prevent overfitting by limiting model complexity or adding penalty terms.<br />
<sup>10</sup><strong>Double descent</strong>: A phenomenon in which model performance worsens with increasing size up to a point, then improves beyond a certain size. This challenges traditional views on overfitting in deep learning, though it is not yet fully explained theoretically as it is a newly observed phenomenon.</p>
<p>This concludes the All About AI series, which has provided an overview of the basics of AI and machine learning. Stay tuned to the newsroom for the latest updates on AI memory technologies driving innovation in the industry.</p>
<p>&nbsp;</p>
<p><span style="color: #ffffff; background-color: #f59b57;"><strong>&lt;Other articles from this series&gt;</strong></span></p>
<p><span style="text-decoration: underline;"><a href="https://news.skhynix.com/all-about-ai-the-origins-evolution-future-of-ai/">[All About AI] The Origins, Evolution &amp; Future of AI</a></span></p>
<p>&nbsp;</p>
<p><a href="https://linkedin.com/showcase/skhynix-news-and-stories/" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="size-full wp-image-15776 aligncenter" src=" https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2025/02/10074354/SK-hynix_Newsroom-banner_1.png" alt="" width="800" height="135" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-680x115.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-768x130.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-a-guide-to-machine-learning-fundamentals/">[All About AI] A Guide to Machine Learning Fundamentals</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>[All About AI] The Origins, Evolution &#038; Future of AI</title>
		<link>https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/</link>
		
		<dc:creator><![CDATA[user]]></dc:creator>
		<pubDate>Mon, 14 Oct 2024 06:00:50 +0000</pubDate>
				<category><![CDATA[featured]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Generative AI]]></category>
		<category><![CDATA[All About AI]]></category>
		<guid isPermaLink="false">http://admin.news.skhynix.com/?p=15942</guid>

					<description><![CDATA[<p>AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This first episode covers the historical evolution of AI and explains how it became integrated into today’s world. &#160; AI-powered robots that walk, talk, [&#8230;]</p>
<p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/">[All About AI] The Origins, Evolution & Future of AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></description>
										<content:encoded><![CDATA[<div style="border: none; background: #D9D9D9; height: auto; padding: 10px 20px; margin-bottom: 10px; color: #000;"><span style="color: #000000; font-size: 18px;">AI has revolutionized people’s lives. For those who want to gain a deeper understanding of AI and use the technology, the SK hynix Newsroom has created the All About AI series. This first episode covers the historical evolution of AI and explains how it became integrated into today’s world.</span></div>
<p>&nbsp;</p>
<p>AI-powered robots that walk, talk, and think like humans have long been a staple of sci-fi comics and movies. However, AI and robotics are no longer merely works of fiction—they have become a reality. Now that AI is here and transforming people’s lives, it is prudent to look back at AI’s origins, the milestones which have shaped the technology’s evolution, and what the future might hold.</p>
<h3 class="tit">From the Turing Test to Machine Learning: AI’s Early Beginnings</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15944 size-full" title="An overview of AI’s evolution through the decades from the 1950s to the 2020s" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01.png" alt="An overview of AI’s evolution through the decades from the 1950s to the 2020s" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053213/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_01-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 1. An overview of AI’s evolution through the decades from the 1950s to the 2020s</p>
<p>&nbsp;</p>
<p>The birth of AI can be traced back to the 1950s. In 1950, British mathematician Alan Turing proposed that machines could “think,” introducing what is now known as the &#8220;Turing test” to evaluate this capability. This is widely recognized to be the first study to present the concept of AI. In 1956, the Dartmouth Summer Research Project on Artificial Intelligence formally introduced the term “AI” to the wider world for the first time. Held in the U.S. state of New Hampshire, the conference fueled further debates on whether machines could learn and evolve like humans.</p>
<p>During the same decade, the development of artificial neural network<sup>1</sup> models marked a significant milestone in computing history. In 1957, U.S. neuropsychologist Frank Rosenblatt introduced the “perceptron” model<sup>2</sup>, empirically demonstrating that computers can learn and recognize patterns. This practical application built on the “neural network theory” developed in 1943 by neurophysiologists Warren McCulloch and Walter Pitts, who conceptualized nerve cell interactions into a simple computational model. Despite these early breakthroughs raising high expectations, research in the field soon stagnated due to limitations in computing power, logical framework, and data availability.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>1</sup><strong>Neural network</strong>: A machine learning program, or model, that makes decisions in a manner similar to the human brain. It creates an adaptive system to make decisions and learn from mistakes.<br />
<sup>2</sup><strong>Perceptron</strong>: The simplest form of a neural network. It is a model of a single neuron that can be used for binary classification problems, enabling it to determine whether an input belongs to one class or another.</p>
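<p>To give a sense of how simple the original idea was, here is a small sketch of a single-neuron perceptron trained with the classic perceptron learning rule. It uses NumPy and the logical AND function purely for illustration; it is a sketch in the spirit of the model, not a historical reconstruction of Rosenblatt’s work.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># A single-neuron perceptron for binary classification
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0
            update = lr * (target - pred)   # adjust only when the prediction is wrong
            w += update * xi
            b += update
    return w, b

# Learn the logical AND function (linearly separable, so one perceptron suffices)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([(1 if xi @ w + b >= 0 else 0) for xi in X])   # expected: [0, 0, 0, 1]
</code></pre>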
<p>Then in the 1980s, “expert systems” emerged, which operated solely on human-defined rules. These systems could make automated decisions to perform tasks such as diagnosis, categorization, and analysis in practical fields such as medicine, law, and retail. However, during this period, expert systems were limited by their reliance on rules set by humans and struggled to understand the complexities of the real world.</p>
<p>In the 1990s, AI evolved from following human commands to autonomously learning and discovering new rules by adopting machine learning algorithms. This became possible due to the advent of digital technology and the internet, which provided access to vast amounts of online data. At this point, AI was able to unearth new rules even humans could not discover. This period marked the start of renewed momentum for AI research, based on machine learning.</p>
<h3 class="tit">The Rise of Deep Learning: A Key Technology in AI’s Growth</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15945 size-full" title="Timeline showing advances in artificial neural networks and deep learning" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02.png" alt="Timeline showing advances in artificial neural networks and deep learning" width="1000" height="596" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02-671x400.png 671w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053217/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_02-768x458.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 2. Timeline showing advances in artificial neural networks and deep learning</p>
<p>&nbsp;</p>
<p>While the 1990s presented opportunities for AI to grow, the evolution of AI has had its share of setbacks. In 1969, early artificial neural network research hit a roadblock when it was discovered that the perceptron model could not solve nonlinear problems<sup>3</sup>, leading to a prolonged downturn in the field. However, computer scientist Geoffrey Hinton, often hailed as the “godfather of deep learning,” breathed new life into artificial neural network research with his groundbreaking ideas.</p>
<p>For example, in 1986, Hinton applied the backpropagation<sup>4</sup> algorithm to a “multilayer perceptron” model, essentially layers of artificial neural networks, proving it could address the limitations of the initial perceptron model. This seemed to spark a revival in artificial neural network research, but as the depth of the networks increased, issues began to emerge in the learning process and outcomes.</p>
<p>In 2006, Hinton introduced the “deep belief network (DBN),” which enhanced the performance of a multilayer perceptron, in his paper “A Fast Learning Algorithm for Deep Belief Nets.” By pre-training each layer through unsupervised learning<sup>5</sup> and then fine-tuning the entire network, the DBN significantly improved the speed and efficiency of neural network learning—which had previously been deemed an issue. This progress paved the way for future advancements in deep learning.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>3</sup>The initial perceptron model was a single-layer perceptron that could not solve nonlinear problems such as the XOR problem, which involves two input values; it outputs 0 if the two input values ​​are the same and 1 if they are different.<br />
<sup>4</sup><strong>Backpropagation</strong>: An algorithm used in neural networks to minimize errors by adjusting the weights. It works by calculating the difference between the predicted and actual values and then updating the weights in reverse order, starting from the output layer.<br />
<sup>5</sup><strong>Unsupervised Learning</strong>: A type of machine learning where the model is trained on input data without explicit labels or predefined outcomes. The goal is to discover and understand hidden structures and patterns within the data.</p>
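<p>The XOR limitation mentioned in the note above, and the way stacked layers overcome it, can be shown in a few lines. The sketch below uses scikit-learn, an assumed library choice: a single linear perceptron cannot reproduce XOR, while a small multilayer network typically can.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># XOR: a single-layer perceptron fails, a multilayer perceptron can succeed
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                       # XOR labels

linear = Perceptron(max_iter=1000).fit(X, y)
print("single-layer:", linear.predict(X))        # cannot match [0, 1, 1, 0]

mlp = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", random_state=0).fit(X, y)
print("multilayer  :", mlp.predict(X))           # typically recovers [0, 1, 1, 0]
</code></pre>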
<p>In 2012, deep learning made a historic leap forward when Hinton’s team won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with their deep learning-based model, AlexNet. This triumph demonstrated deep learning’s immense power by recording an error rate of just 16.4%, well below the 25.8% of the previous year’s winner.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15946 size-full" title="An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective, IEEE Access, Sept. 2017 p.3)" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03.png" alt="An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective, IEEE Access, Sept. 2017 p.3)" width="1000" height="623" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03-642x400.png 642w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053222/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_03-768x478.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 3. An Overview of ILSVRC’s Image Recognition Error Rate by Year (Kien Nguyen, Arun Ross, <em>Iris Recognition With Off-the-Shelf CNN Features: A Deep Learning Perspective</em>, IEEE Access, Sept. 2017 p.3)</p>
<p>&nbsp;</p>
<p>Deep learning, a focal point of AI research, has grown rapidly since the 2010s for two primary reasons. First, advances in computer systems, including graphics processing units (GPUs), have driven AI development. Originally designed for graphics processing, GPUs can process repetitive and similar tasks in parallel. This capability enables GPUs to process data faster than central processing units (CPUs). In the 2010s, general-purpose computing on GPUs (GPGPU) emerged, enabling GPUs to be used for broader computational tasks beyond graphics rendering and allowing them to replace CPUs in some instances. GPUs have since been widely adopted for training artificial neural networks, accelerating the development of deep learning, which must perform iterative computations over large datasets to extract features and therefore benefits greatly from this parallel processing capability.</p>
<p>Second, the expansion of data resources has fueled progress in deep learning. Training an artificial neural network requires vast amounts of data. In the past, data was primarily sourced from users manually inputting information into computers. However, the explosion of the internet and search engines in the 1990s exponentially increased the range of data available for processing. In the 2000s, the advent of technologies such as smartphones and the Internet of Things (IoT) contributed to the birth of the Big Data era, where real-time information flows from every corner of the globe. Deep learning algorithms use this large quantity of data for training, growing increasingly sophisticated. This data revolution has therefore set the stage for significant advancements in deep learning technology.</p>
<p style="text-align: center;"><iframe loading="lazy" src="https://www.youtube.com/embed/WXuK6gekU1Y?si=W_lg7iEWjh4bfGDb" width="810" height="455" frameborder="0" allowfullscreen="allowfullscreen"><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start">﻿</span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span><span data-mce-type="bookmark" style="display: inline-block; width: 0px; overflow: hidden; line-height: 0;" class="mce_SELRES_start"></span></iframe></p>
<p class="source" style="text-align: center;">Figure 4. Google DeepMind’s <em>AlphaGo </em><em>&#8211; The Movie </em>is a documentary film about the epic battle between AlphaGo and Lee Sedol on March 9, 2016</p>
<p>&nbsp;</p>
<p>By 2016, the evolution of AI reached a dramatic turning point with the development of AlphaGo, an advanced AI program created by Google DeepMind to play the board game Go. This extraordinary AI program captivated the world when it defeated Go grandmaster Lee Sedol by an impressive 4-1 score. Combining deep learning with reinforcement learning<sup>6</sup> and Monte Carlo tree search (MCTS)<sup>7</sup> algorithms, AlphaGo learned to mimic human intuition, predict moves, and strategize through tens of thousands of self-played games. AlphaGo’s victory over a legendary human player signaled the beginning of a new AI era.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>6</sup><strong>Reinforcement Learning</strong>: A type of machine learning where an AI agent learns to make decisions by interacting with an environment. The agent receives rewards or penalties based on its actions and aims to maximize cumulative rewards over time by optimizing its strategy.<br />
<sup>7</sup><strong>Monte Carlo tree search (MCTS)</strong>: A stochastic algorithm that repeatedly generates a series of random numbers to derive a numerical approximation of a function&#8217;s value. It structures the possible actions of the current situation into a search tree and uses random simulations to infer the pros and cons of each, ultimately determining the optimal course of action.</p>
<h3 class="tit">ChatGPT: The Catalyst for the Generative AI Boom</h3>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15947 size-full" title="Generative AI explained through key AI subsets" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04.png" alt="Generative AI explained through key AI subsets" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053226/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_04-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 5. Generative AI explained through key AI subsets</p>
<p>&nbsp;</p>
<p>At the close of 2022, humanity stood on the brink of a transformative leap with AI technology. OpenAI unveiled ChatGPT, powered by a type of LLM<sup>8</sup> known as generative pre-trained transformer (GPT) 3.5, marking the dawn of the generative AI era. Most notably, this leap propelled AI into the creative realm, a domain once considered uniquely human. Now, generative AI can produce high-quality content across diverse formats, moving beyond traditional deep learning, which merely predicts or classifies data. Instead, generative AI, using LLMs or various image generation models such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, creates original results tailored to user needs.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>8</sup><strong>Large language model (LLM)</strong>: Deep learning algorithms that perform a variety of natural language processing tasks by leveraging extensive data.</p>
<p>To provide a clearer context for the evolution of generative AI, it is essential to examine its origins and key developments. The roots of generative AI trace back to 2014, when American scientist and researcher Ian Goodfellow introduced GANs. In GANs, two neural networks engage in a continuous duel: one generates new data from a dataset, while the other network compares this new data to the original dataset to determine its authenticity. Through this iterative process, GANs produce increasingly refined and sophisticated outputs. Over time, researchers have enhanced and expanded upon this model, leading to its widespread use in applications such as image generation and transformation.</p>
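<p>As a very rough sketch of this two-network duel, the code below trains a tiny GAN on a one-dimensional toy distribution. PyTorch, the network sizes, and the target distribution are all assumptions made for illustration; real image-generating GANs are far larger and considerably harder to train.</p>
<pre style="background: #f5f5f5; padding: 15px; font-size: 14px; line-height: 1.5; overflow-x: auto;"><code># Minimal GAN sketch: generator vs. discriminator on 1-D toy data
import torch
import torch.nn as nn

def real_data(n):                     # "real" samples drawn from N(4.0, 1.5)
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))               # noise to fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()) # sample to "is it real?"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real, noise = real_data(64), torch.randn(64, 8)
    fake = G(noise)

    # Discriminator step: label real samples 1 and generated samples 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call the fakes real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())  # should drift toward ~4
</code></pre>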
<p>In 2017, the natural language processing (NLP)<sup>9</sup> model “transformer” was introduced. This model treats the relationships between elements of the data as key variables. By assigning more attention to certain information, transformers can learn complex patterns and relationships within the data, capturing essential details to produce higher-quality results. This advancement transformed NLP tasks such as language comprehension, machine translation, and conversational systems, leading to the development of LLMs such as the aforementioned GPT.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>9</sup><strong>Natural language processing (NLP)</strong>: A subfield of AI that uses algorithms to analyze and process natural language data. By examining syntactic structures, semantic relationships, and contextual patterns, NLP systems can perform tasks such as language translation.</p>
<p>First released in 2018, GPTs have rapidly advanced in performance by expanding their parameters and training data every year. By 2022, OpenAI’s chatbot ChatGPT, powered by GPT-3.5, completely changed the paradigm of AI. ChatGPT, with its exceptional ability to understand user context, deliver relevant responses, and handle diverse queries, quickly gained traction. <a href="https://www.statista.com/chart/29174/time-to-one-million-users/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">Within a week of its launch, it drew over 1 million users</span></a> and attracted <a href="https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">more than 100 million active users within two months</span></a>.</p>
<p>The rapid advancements in AI culminated in a major technological leap forward in 2023 with the launch of GPT-4 by OpenAI. This new model is built on a dataset roughly 500 times larger than that of GPT-3.5. GPT-4, now considered a Large Multimodal Model (LMM)<sup>10</sup>, can simultaneously process diverse formats of input data, including images, audio, and video, expanding far beyond its text-only predecessors. In 2024, OpenAI introduced GPT-4o, an enhanced model offering faster, more efficient processing of text, voice, and images. Capitalizing on the generative AI boom triggered by ChatGPT, companies have rolled out diverse services. For example, Google’s Gemini can simultaneously recognize and understand text, images, and audio; Meta’s SAM accurately identifies and isolates objects in images; and OpenAI’s Sora generates videos from text prompts.</p>
<p style="font-size: 14px; font-style: italic; color: #555;"><sup>10</sup><strong>Large Multimodal Model (LMM)</strong>: A deep learning algorithm that can handle many types of data, including images, audio, and more, not just text.</p>
<p>The generative AI market is only beginning to unleash its potential. According to a <a href="https://www.idc.com/getdoc.jsp?containerId=prUS51572023" target="_blank" rel="noopener noreferrer"><span style="text-decoration: underline;">report from the global market research firm International Data Corporation (IDC)</span></a>, the market is set to be worth 40.1 billion USD in 2024—2.7 times larger than the previous year. Looking ahead, the market is expected to continue its growth each year and reach 151.1 billion USD by 2027. As generative AI evolves, its influence will extend beyond software to various formats including hardware and internet services. The world can expect a leap in capabilities and a push towards greater accessibility, making cutting-edge AI technology available to an ever-growing audience.</p>
<h3 class="tit">AI’s Impact on Revolutionizing Today and Redefining Tomorrow</h3>
<p>Just as Google search revolutionized the early 2000s and mobile social media reshaped the 2010s, AI is now driving transformative changes across society. The pace of this technological advancement is unprecedented, and the challenges and concerns of humanity are growing along with it.</p>
<p>So what is the “next generative AI”? The most notable technology around today is perhaps on-device AI. Unlike traditional AI, which relies on large cloud servers to process data from edge devices, on-device AI operates directly on electronic devices such as smartphones and PCs through integrated AI chipsets and smaller LLMs (sLLMs). This shift promises to enhance security, conserve resources, and deliver more personalized AI experiences.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15948 size-full" title="Cloud-based AI vs on-device AI structures" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05.png" alt="Cloud-based AI vs on-device AI structures" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053230/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_05-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 6. Cloud-based AI vs on-device AI structures</p>
<p>&nbsp;</p>
<p>AI will seamlessly integrate into an increasing number of devices, continuously evolving in form and function. Thus, innovations that once seemed like science fiction are becoming reality. For instance, in 2023, U.S. startup Humane launched the AI Pin, a wearable device with a laser-ink display that projects a menu onto the user’s palm. At CES 2024, Rabbit&#8217;s R1 and Brilliant Labs’ Frame showcased their own cutting-edge wearable AI technology. Meanwhile, mixed reality (MR) headsets, like Apple’s Vision Pro and Meta’s Quest, are pushing beyond traditional virtual reality (VR) and metaverse experiences, opening up new markets.</p>
<p>However, as technology races forward, it not only creates new opportunities but also brings about social challenges. The rapid rise of AI has sparked concerns about society’s ability to keep up with these advancements. In particular, AI’s potential misuse and its impact on real-world issues have heightened these fears. Sophisticated AI-generated content, such as deepfake videos and manipulated images, creates fake news and disrupts society. Recently, concerns about fake content have intensified in many countries ahead of major elections, including the 2024 U.S. presidential election.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-15949 size-full" title="Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform" src="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06.png" alt="Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform" width="1000" height="563" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06-680x383.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/10/07053239/SK-hynix_All-About-AI_The-Origins-Evolution-Future-of-AI_06-768x432.png 768w" sizes="(max-width: 1000px) 100vw, 1000px" /></p>
<p class="source" style="text-align: center;">Figure 7. Social anxiety and disruption due to deepfake technology portrayed by DALL-E, a generative AI platform</p>
<p>&nbsp;</p>
<p>There are also risks associated with the development and use of AI. As generative AI crawls and merges publicly available web content to train its models, there are concerns about plagiarism. Moreover, copyright disputes can arise from creating content using similar prompts with the same generative AI program. The potential for AI to shift from enhancing productivity to replacing jobs and disrupting the labor market presents a troubling reality for some as well.</p>
<p>AI has created a world beyond human imagination. As this new world unfolds, it is crucial to prepare for the changes ahead. Addressing this new era involves thoughtful planning and social discussion, both of which begin with a deep understanding of AI’s potential and implications. That understanding is what the All About AI series aims to provide.</p>
<p>&nbsp;</p>
<p><a href="https://linkedin.com/showcase/skhynix-news-and-stories/" target="_blank" rel="noopener noreferrer"><img loading="lazy" decoding="async" class="size-full wp-image-15776 aligncenter" src=" https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png" alt="" width="800" height="135" srcset="https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1.png 1000w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-680x115.png 680w, https://d36ae2cxtn9mcr.cloudfront.net/wp-content/uploads/2024/09/13015412/SK-hynix_Newsroom-banner_1-768x130.png 768w" sizes="(max-width: 800px) 100vw, 800px" /></a></p><p>The post <a href="https://skhynix-news-global-stg.mock.pe.kr/all-about-ai-the-origins-evolution-future-of-ai/">[All About AI] The Origins, Evolution & Future of AI</a> first appeared on <a href="https://skhynix-news-global-stg.mock.pe.kr">SK hynix Newsroom</a>.</p>]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
