<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Danodocs | AI</title><link>https://ai.danfanderson.com/</link><description>Recent content on Danodocs | AI</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 11 Mar 2026 13:26:44 -0600</lastBuildDate><atom:link href="https://ai.danfanderson.com/index.xml" rel="self" type="application/rss+xml"/><item><title>Prediction Guard - Semantic Layer</title><link>https://ai.danfanderson.com/docs/webinars/pred-guard-ai-semantic-layer/</link><pubDate>Wed, 11 Mar 2026 13:26:44 -0600</pubDate><guid>https://ai.danfanderson.com/docs/webinars/pred-guard-ai-semantic-layer/</guid><description>&lt;hr&gt;
&lt;h2 id="high-level-notes"&gt;High Level Notes &lt;a href="#high-level-notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;DA: seems like the core concept was how to create a &amp;ldquo;semantic layer&amp;rdquo; between a user-facing LLM or Agent and an enterprise&amp;rsquo;s underlying data (data lakes, OLTP databases, traditional documents, etc.)&lt;/li&gt;
&lt;li&gt;DA: will post slides if/when I get them
&lt;ul&gt;
&lt;li&gt;Dan written notes from the webinar: &lt;a href="https://ai.danfanderson.com/files/pred-guard-semtic-dan-notes.pdf"&gt;Download DA&amp;rsquo;s notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="three-principles-of-semantic-layering"&gt;Three Principles of Semantic Layering &lt;a href="#three-principles-of-semantic-layering" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;h3 id="principle-1--data-curation"&gt;Principle #1 | Data Curation &lt;a href="#principle-1--data-curation" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;Don&amp;rsquo;t unleash all of your enterprise data on the Agent. Curate key subsets of the data and surface that data to the Agent world&lt;/p&gt;</description></item><item><title>Notes &amp; Code Snippets</title><link>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/notes-code-snippets/</link><pubDate>Wed, 25 Feb 2026 18:50:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/notes-code-snippets/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/datacamp-ai-eng-for-devs/notes-code-snippets.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="openai-chat-completion-request"&gt;OpenAI Chat Completion Request &lt;a href="#openai-chat-completion-request" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="c01d77e" class="language-python "&gt;
 &lt;code&gt;from openai import OpenAI

# Initialize the client with your API key
client = OpenAI(api_key=&amp;#34;&amp;lt;OPENAI_API_TOKEN&amp;gt;&amp;#34;)

# Create a request to the Chat Completions endpoint
response = client.chat.completions.create(
 model=&amp;#34;gpt-4o-mini&amp;#34;,
 messages=[{&amp;#34;role&amp;#34;: &amp;#34;user&amp;#34;, &amp;#34;content&amp;#34;: &amp;#34;In the context of marketing for a new Italian family-style restaurant, generate a marketing slogan to be used in marketing materials including a website and print.&amp;#34;}],
 max_completion_tokens=100
)

print(response.choices[0].message.content)&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;h2 id="classification"&gt;Classification &lt;a href="#classification" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;h4 id="zero-shot"&gt;Zero Shot &lt;a href="#zero-shot" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;Using a &amp;ldquo;Zero Shot&amp;rdquo; prompt (don&amp;rsquo;t include any examples in the context), generate a sentiment classification of product reviews.&lt;/p&gt;</description></item><item><title>Notes &amp; Code Snippets</title><link>https://ai.danfanderson.com/docs/courses-workshops/packt/ml-and-genai-sys-design-wshop/</link><pubDate>Wed, 25 Feb 2026 18:50:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/packt/ml-and-genai-sys-design-wshop/</guid><description>&lt;h2 id="notes"&gt;Notes &lt;a href="#notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;</description></item><item><title>Vibe Coding Notes</title><link>https://ai.danfanderson.com/docs/vibe-coding/vibe-coding-notes/</link><pubDate>Sun, 22 Feb 2026 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/vibe-coding/vibe-coding-notes/</guid><description>&lt;hr&gt;
&lt;h2 id="links--resources"&gt;Links &amp;amp; Resources &lt;a href="#links--resources" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Adal Coding Agent&lt;/strong&gt;: &lt;a href="https://sylph.ai/" rel="external" target="_blank"&gt;https://sylph.ai/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Zach Wilson&amp;rsquo;s Vibe Coding Bootcamp&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Day 1&lt;/em&gt;: &lt;a href="https://youtube.com/live/BSN0uwavP4I" rel="external" target="_blank"&gt;https://youtube.com/live/BSN0uwavP4I&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Day 2&lt;/em&gt;: &lt;a href="https://www.youtube.com/live/pxUMU755uaI" rel="external" target="_blank"&gt;https://www.youtube.com/live/pxUMU755uaI&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="adal-prompts"&gt;Adal Prompts &lt;a href="#adal-prompts" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;Prompts I have used&lt;/p&gt;</description></item><item><title>ChatGPT | Agentic AI Payment Protocols</title><link>https://ai.danfanderson.com/docs/supporting-pages/agentic-ai-payment-protocols-chatgpt/</link><pubDate>Fri, 23 Jan 2026 13:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/supporting-pages/agentic-ai-payment-protocols-chatgpt/</guid><description>&lt;p&gt;Here’s a clear overview of the &lt;strong&gt;major agentic payment standards&lt;/strong&gt; emerging in 2025–2026, the current state of adoption, and &lt;strong&gt;official sources/links&lt;/strong&gt; you can consult for each.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="-1-agentic-commerce-protocol-acp"&gt;🧭 1. &lt;strong&gt;Agentic Commerce Protocol (ACP)&lt;/strong&gt; &lt;a href="#-1-agentic-commerce-protocol-acp" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;What it is:&lt;/strong&gt;
An &lt;strong&gt;open commerce standard&lt;/strong&gt; for AI agents to complete purchases on behalf of users, focused on checkout and payment interactions using existing payment rails (credit/debit cards, tokenization). It defines how agents discover products, invoke secure checkouts, and pass scoped payment tokens without exposing raw card details. (&lt;a href="https://developers.openai.com/commerce/?utm_source=chatgpt.com" rel="external" target="_blank" title="Agentic Commerce"&gt;OpenAI Developer Docs&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;)&lt;/p&gt;</description></item><item><title>Bootcamp Notes</title><link>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ae-bootcamp-notes/</link><pubDate>Tue, 21 Oct 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ae-bootcamp-notes/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ae-bootcamp-notes.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="setting-up-python-environment"&gt;Setting Up Python Environment &lt;a href="#setting-up-python-environment" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;Use a Python 3.11 environment - later code samples do not support Python 3.13&lt;/li&gt;
&lt;/ul&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="3f471d4" class="language-bash "&gt;
 &lt;code&gt;# navigate to your project folder
cd your-folder
# set up a Python 3.11 virtual environment in the `venvs` folder in your home folder: `~/venvs/py311env`
/opt/homebrew/opt/python@3.11/bin/python3.11 -m venv ~/venvs/py311env

# activate the virtual environment for this session - so python modules will be installed here (for the project)
source ~/venvs/py311env/bin/activate

# install Python packages
pip install -r requirements.txt&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;p&gt;From the repo readme: start the web application&lt;/p&gt;</description></item><item><title>Glossary</title><link>https://ai.danfanderson.com/docs/glossary/</link><pubDate>Wed, 19 Feb 2025 15:11:34 -0600</pubDate><guid>https://ai.danfanderson.com/docs/glossary/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/glossary.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;hr&gt;
&lt;h2 id="general-links-and-resources"&gt;General Links and Resources &lt;a href="#general-links-and-resources" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Zach Wilson / DataExpert.io GitHub &amp;ldquo;Handbooks&amp;rdquo;&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/DataExpert-io/data-engineer-handbook" rel="external" target="_blank"&gt;Data Engineering Handbook&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataExpert-io/ai-engineer-handbook" rel="external" target="_blank"&gt;AI Engineering Handbook&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/DataExpert-io/analytics-engineer-handbook" rel="external" target="_blank"&gt;Analytics Engineering Handbook&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chip Huyen AI Engineering Book Companion Site&lt;/strong&gt;: &lt;a href="https://github.com/chiphuyen/aie-book" rel="external" target="_blank"&gt;https://github.com/chiphuyen/aie-book&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Thomas Yeh&lt;/strong&gt;: &lt;a href="https://www.byhand.ai" rel="external" target="_blank"&gt;https://www.byhand.ai&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt; substack and online courses
&lt;ul&gt;
&lt;li&gt;you are subscribed to his/their Substack newsletter&lt;/li&gt;
&lt;li&gt;He has a free Agentic AI intro course: &lt;a href="https://www.byhand.ai/p/introduction-to-agentic-ai" rel="external" target="_blank"&gt;https://www.byhand.ai/p/introduction-to-agentic-ai&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Check out the course you have joined here: &lt;a href="https://community.genai.works/" rel="external" target="_blank"&gt;https://community.genai.works/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MEAP Book (writing in progress) &lt;a href="https://www.manning.com/books/build-a-multi-agent-system-from-scratch" rel="external" target="_blank"&gt;https://www.manning.com/books/build-a-multi-agent-system-from-scratch&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="alert alert-success d-flex" role="alert"&gt;
 &lt;div class="flex-shrink-1 alert-icon"&gt;
 
 &lt;span class="material-icons size-20 me-2"&gt;
 check_circle
 &lt;/span&gt;&lt;/div&gt;
 
 &lt;div class="w-100"&gt;&lt;!-- raw HTML omitted --&gt;NOTE:&lt;!-- raw HTML omitted --&gt; This should be the link to see the content of the free Agentic AI course: &lt;a href="https://community.genai.works/spaces/18408418/content" rel="external" target="_blank"&gt;https://community.genai.works/spaces/18408418/content&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/div&gt;
 &lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;L402.org: AI Agent payment protocol (?)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nvidia.com/nemo-framework/index.html" rel="external" target="_blank"&gt;https://docs.nvidia.com/nemo-framework/index.html&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;: NVIDIA Nemo Framework. From DataExpert.io course. Use to implement Guardrails?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Integration&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io" rel="external" target="_blank"&gt;https://modelcontextprotocol.io&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/api/docs/mcp" rel="external" target="_blank"&gt;https://developers.openai.com/api/docs/mcp&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Agent Skills or Skills: &lt;a href="https://agentskills.io" rel="external" target="_blank"&gt;https://agentskills.io&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="-a-"&gt;&amp;ndash; A &amp;ndash; &lt;a href="#-a-" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;A2A&lt;/strong&gt;: Agent-to-Agent interactions/transactions - example: A research agent pays a summarization agent
&lt;ul&gt;
&lt;li&gt;related:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;H2A&lt;/strong&gt;: Human-to-Agent - example: User pays ChatGPT or invokes a tool&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A2H&lt;/strong&gt;: Agent-to-Human - example: Agent pays a contractor or requests approval&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A2S&lt;/strong&gt;: Agent-to-Service - example: Agent pays an API or SaaS endpoint&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;B2B&lt;/strong&gt; / &lt;strong&gt;P2P&lt;/strong&gt;: traditional payments (non-autonomous)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;A2A systems require:
&lt;ul&gt;
&lt;li&gt;Machine-verifiable identity&lt;/li&gt;
&lt;li&gt;Programmatic authorization&lt;/li&gt;
&lt;li&gt;Instant or near-instant settlement&lt;/li&gt;
&lt;li&gt;Micropayments&lt;/li&gt;
&lt;li&gt;Deterministic execution&lt;/li&gt;
&lt;li&gt;Cryptographic guarantees&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agent Skills&lt;/strong&gt;: A simple, open format for giving agents new capabilities and expertise. Agent Skills are folders of instructions, scripts, and resources that agents can discover and use to do things more accurately and efficiently
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Difference between Skills &amp;amp; Tools&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Tools&lt;/strong&gt; are the low-level primitives an agent can call (APIs/functions like “read file,” “send message,” “run shell command,” “web search”). They’re usually narrow, capability-oriented, and runtime-provided.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agent skills&lt;/strong&gt; are higher-level packages of behavior that use tools (and add procedure, guardrails, prompts, code, tests, configs) to reliably accomplish a task end-to-end (like “triage inbox,” “generate social post + schedule,” “audit SSH hardening”).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;In short&lt;/strong&gt;: tools = &lt;code&gt;what the agent can do&lt;/code&gt;; skills = &lt;code&gt;how the agent should do a specific job (often by orchestrating tools)&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic AI&lt;/strong&gt;: &lt;em&gt;Yeh Course&lt;/em&gt; - Agentic AI takes artificial intelligence beyond simple prompt-based responses. Unlike traditional models, agents can observe, reflect, use tools, plan ahead, and even collaborate—just like a human problem solver&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agentic AI - Payment Protocols&lt;/strong&gt;: (relatively) early protocols addressing different types of AI Agent payment use cases. Listed below are major protocol alternatives:
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Agentic Commerce Protocol (ACP)&lt;/em&gt;: An open commerce standard for AI agents to complete purchases on behalf of users, focused on checkout and payment interactions using existing payment rails (credit/debit cards, tokenization)
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/commerce/" rel="external" target="_blank"&gt;https://developers.openai.com/commerce/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Agent Payments Protocol (AP2)&lt;/em&gt;: An open standard for secure, interoperable agent-initiated payments, focused on authorization, trust, and auditability of payments initiated by AI agents. AP2 uses cryptographically signed “mandates” to ensure user intent and accountability for transactions
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ap2-protocol.org" rel="external" target="_blank"&gt;https://ap2-protocol.org&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;x402: Internet-Native Payments Standard&lt;/em&gt;: An HTTP-native payment protocol that revives HTTP 402 (“Payment Required”) to enable autonomous payments between clients/agents and services. When a service requires payment, it responds with 402 and machine-readable payment instructions; the client pays and retries
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.x402.org" rel="external" target="_blank"&gt;https://www.x402.org&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Universal Commerce Protocol (UCP)&lt;/em&gt;: An emerging standard. A broader open commerce standard announced by Google and partners (2026) designed to unify multiple protocols across discovery → purchase → post-purchase, aimed at interoperability between agents and commerce endpoints. It complements AP2, ACP, A2A, and MCP rather than directly replacing them
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.x402.org/?utm_source=chatgpt.com" rel="external" target="_blank"&gt;https://www.x402.org/?utm_source=chatgpt.com&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;See also: &lt;a href="https://ai.danfanderson.com/docs/supporting-pages/agentic-ai-payment-protocols-chatgpt/"&gt;ChatGPT Response&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI Engineering&lt;/strong&gt;: process of building applications leveraging readily available models&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;activation function&lt;/strong&gt;: activation functions prevent neural networks from collapsing down to linear models - i.e. they introduce nonlinearity (DA: otherwise the model just reduces to a weighted sum plus bias expression [?])&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ASR&lt;/strong&gt;: Automatic Speech Recognition&lt;/li&gt;
&lt;/ul&gt;
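To make the activation-function point concrete, here is a minimal plain-Python sketch (the weights are toy numbers of my own choosing): stacking two linear layers with no activation collapses to a single weighted-sum-plus-bias map, while inserting a ReLU between them breaks that equivalence.

```python
# Two stacked "layers" on a scalar input: y = w2 * (w1 * x + b1) + b2
def linear_stack(x, w1=2.0, b1=1.0, w2=3.0, b2=0.5):
    return w2 * (w1 * x + b1) + b2

# Without an activation, the stack collapses to ONE linear map:
# w = w2 * w1, b = w2 * b1 + b2
def collapsed(x):
    return (3.0 * 2.0) * x + (3.0 * 1.0 + 0.5)

def relu(z):
    # A simple non-linear activation
    return max(0.0, z)

# Inserting the activation between layers breaks the equivalence,
# so stacked layers can model non-linear relationships
def relu_stack(x, w1=2.0, b1=1.0, w2=3.0, b2=0.5):
    return w2 * relu(w1 * x + b1) + b2
```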
&lt;h2 id="-b-"&gt;&amp;ndash; B &amp;ndash; &lt;a href="#-b-" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Backpropagation&lt;/strong&gt;: DA | a training technique that uses the chain rule to compute how much each weight and bias contributed to the error, so that an optimization algorithm such as gradient descent can iteratively adjust those values&lt;/li&gt;
&lt;/ul&gt;
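A toy sketch of the idea (a single weight and bias, hand-derived gradients, plain Python; illustrative only, not production training code):

```python
# One-neuron model y_hat = w * x + b trained toward a single target point.
# Loss: L = (y_hat - y)**2; gradients via the chain rule:
#   dL/dw = 2 * (y_hat - y) * x,   dL/db = 2 * (y_hat - y)
def train_step(w, b, x, y, lr=0.1):
    y_hat = w * x + b
    err = y_hat - y
    # Gradient descent update: step each parameter against its gradient
    return w - lr * 2 * err * x, b - lr * 2 * err

def train(x, y, steps=200, lr=0.05):
    w, b = 0.0, 0.0
    for _ in range(steps):
        w, b = train_step(w, b, x, y, lr)
    return w, b
```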
&lt;h2 id="-c-"&gt;&amp;ndash; C &amp;ndash; &lt;a href="#-c-" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Chain of Thought (COT)&lt;/strong&gt;: explicitly asking the model to think step by step (&amp;ldquo;think step by step&amp;rdquo;, &amp;ldquo;explain your decision&amp;rdquo;)
&lt;ul&gt;
&lt;li&gt;used for complex reasoning tasks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Chaining&lt;/strong&gt;: breaking down prompts or model interactions into serial steps (e.g. as opposed to one encompassing prompt)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Completion&lt;/strong&gt;: the output an LLM produces from a prompt (e.g. the prompt &lt;code&gt;Go is...&lt;/code&gt; might yield the completion &lt;code&gt;a programming language...&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Context Window&lt;/strong&gt;: the maximum amount of input, measured in tokens, that can be passed to the model in a single prompt
&lt;ul&gt;
&lt;li&gt;Early models were limited to roughly 2,000 tokens; many current models accept far larger windows (100k+ tokens)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cosine Similarity&lt;/strong&gt;: &lt;em&gt;From ChatGPT&lt;/em&gt; Cosine Similarity is a measure of similarity between two vectors in a high-dimensional space, commonly used in LLMs for text embeddings and semantic search. It calculates the cosine of the angle between two vectors, where 1 means identical, 0 means unrelated, and -1 means completely opposite&lt;/li&gt;
&lt;/ul&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Concept&lt;/th&gt;
 &lt;th&gt;Description&lt;/th&gt;
 &lt;th&gt;Operator/Syntax&lt;/th&gt;
 &lt;th&gt;Notes&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Cosine Similarity&lt;/td&gt;
 &lt;td&gt;A measure of how one vector&amp;rsquo;s direction is similar to another vector&amp;rsquo;s direction. In Postgres &lt;code&gt;pgvector&lt;/code&gt; you get this by subtracting the &lt;code&gt;Cosine Distance&lt;/code&gt; from &lt;code&gt;1&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;pgvector: &lt;code&gt;1 - (A &amp;lt;=&amp;gt; B)&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Values: &lt;code&gt;-1&lt;/code&gt; to &lt;code&gt;1&lt;/code&gt; (higher values mean more similar)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cosine Distance&lt;/td&gt;
 &lt;td&gt;Also a measure of how one vector differs from another in terms of its direction&lt;/td&gt;
 &lt;td&gt;pgvector: &lt;code&gt;A &amp;lt;=&amp;gt; B&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Values: &lt;code&gt;0&lt;/code&gt; to &lt;code&gt;2&lt;/code&gt; (lower values mean more similar)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
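A small plain-Python sketch of both quantities, mirroring the pgvector relationship in the table (similarity equals one minus distance):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|): 1 = same direction, 0 = orthogonal, -1 = opposite
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def cosine_distance(a, b):
    # The quantity pgvector's cosine-distance operator returns (range 0 to 2)
    return 1.0 - cosine_similarity(a, b)
```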
&lt;h2 id="-d-"&gt;&amp;ndash; D &amp;ndash; &lt;a href="#-d-" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;dspy&lt;/strong&gt;: &lt;code&gt;dspy&lt;/code&gt; (short for Declarative Self-Improving Language Models) is an open-source Python framework for building self-improving LLM applications.
&lt;ul&gt;
&lt;li&gt;It allows engineers to define AI tasks declaratively and then automatically optimize, test, and refine the prompts, parameters, and reasoning strategies that an LLM uses to complete those tasks&lt;/li&gt;
&lt;li&gt;High-Level dspy Workflow
&lt;ol&gt;
&lt;li&gt;You define a task module/signature.&lt;/li&gt;
&lt;li&gt;You provide a dataset of (input → correct output) examples.&lt;/li&gt;
&lt;li&gt;You choose an optimizer (MIPRO, BootstrapFewShot, COPRO, etc.).&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dspy&lt;/code&gt; tests many versions of system prompts internally.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;dspy&lt;/code&gt; selects the best-performing prompt/module configuration.&lt;/li&gt;
&lt;li&gt;You freeze/export the optimized program for production use.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="-e-"&gt;&amp;ndash; E &amp;ndash; &lt;a href="#-e-" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;embeddings&lt;/strong&gt;: vectors that aim to capture the meaning of the original text (from which the embedding was calculated)
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;From ChatGPT&lt;/em&gt; An embedding is the dense vector form that represents a concept in a model’s “understanding space”
&lt;ul&gt;
&lt;li&gt;embeddings encode meaning rather than direct word identity.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="-3-summary-of-sparse-vector-vs-embedding"&gt;🧠 3. &lt;strong&gt;Summary of Sparse Vector vs. Embedding&lt;/strong&gt; &lt;a href="#-3-summary-of-sparse-vector-vs-embedding" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Feature&lt;/th&gt;
 &lt;th&gt;Sparse Vector&lt;/th&gt;
 &lt;th&gt;Dense Vector / Embedding&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Size&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;= vocabulary size (can be 10k–1M+)&lt;/td&gt;
 &lt;td&gt;Typically 256–4096&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Meaning of each position&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Specific word or feature&lt;/td&gt;
 &lt;td&gt;Learned, abstract dimension&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Zeros&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Mostly zeros&lt;/td&gt;
 &lt;td&gt;Mostly non-zero&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Captures semantics?&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;No&lt;/td&gt;
 &lt;td&gt;Yes&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Memory efficiency&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Poor&lt;/td&gt;
 &lt;td&gt;High&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Used by&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;TF-IDF, one-hot, bag-of-words&lt;/td&gt;
 &lt;td&gt;Neural nets, LLMs, embeddings&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
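A toy illustration of the table's contrast (the vocabulary and the embedding values here are invented for demonstration): a one-hot sparse vector is as long as the vocabulary and almost all zeros, while a dense embedding is short and mostly non-zero.

```python
vocab = ["cat", "dog", "car", "truck", "apple"]

def one_hot(word):
    # Sparse vector: length equals the vocabulary size, a single 1, rest zeros
    return [1 if w == word else 0 for w in vocab]

# Hypothetical learned embeddings (values invented for illustration):
# similar words get nearby vectors even though their one-hots are orthogonal
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.7],
}
```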
&lt;hr&gt;
&lt;p&gt;In short:&lt;/p&gt;</description></item><item><title>x402 | Conceptual Flow</title><link>https://ai.danfanderson.com/docs/supporting-pages/x402-conceptual-flow/</link><pubDate>Fri, 23 Jan 2026 13:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/supporting-pages/x402-conceptual-flow/</guid><description>&lt;h1 id="0-links-to-check-out"&gt;0️⃣ Links to Check Out &lt;a href="#0-links-to-check-out" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h1&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://youtu.be/Jn2EWtQcGyk?si=mKOkj8cl-mmdbMqN" rel="external" target="_blank"&gt;The DEFINITIVE guide to x402 (ft. Erik Reppel) - Coinbase Developer Podcast&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h1 id="1-a2a-agent-to-agent--the-intent-layer"&gt;1️⃣ A2A (Agent-to-Agent) — &lt;em&gt;The Intent Layer&lt;/em&gt; &lt;a href="#1-a2a-agent-to-agent--the-intent-layer" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h1&gt;&lt;p&gt;&lt;strong&gt;A2A answers: &lt;em&gt;Who is talking to whom, and why?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>MCP Servers &amp; Governance</title><link>https://ai.danfanderson.com/docs/supporting-pages/mcp-notes/</link><pubDate>Fri, 30 Jan 2026 13:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/supporting-pages/mcp-notes/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/supporting-pages/mcp-notes.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;p&gt;Expanding on notes captured on the &lt;a href="https://ai.danfanderson.com/docs/glossary/"&gt;Glossary page&lt;/a&gt;&lt;/p&gt;</description></item><item><title>Structured Outputs</title><link>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/structured-outputs/</link><pubDate>Mon, 09 Mar 2026 15:32:35 -0500</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/structured-outputs/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/datacamp-ai-eng-for-devs/structured-outputs.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="structured-outputs---notes"&gt;Structured Outputs - Notes &lt;a href="#structured-outputs---notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;h3 id="generating-a-table"&gt;Generating a Table &lt;a href="#generating-a-table" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="ff9e017" class="language-python "&gt;
 &lt;code&gt;# Assumes: from openai import OpenAI, plus a course-provided get_response() helper
client = OpenAI(api_key=&amp;#34;&amp;lt;OPENAI_API_TOKEN&amp;gt;&amp;#34;)

# Create a prompt that generates the table
prompt = &amp;#34;&amp;#34;&amp;#34;
Identify the ten &amp;#34;must read&amp;#34; science fiction books to be displayed on the homepage of a leading online bookstore with an extensive collection of science fiction novels.

Output the list of books in a table with ten rows, one representing each book, and include these columns: `Title`, `Author`, `Year`. Output ten books in the generated table.
&amp;#34;&amp;#34;&amp;#34;

# Get the response
response = get_response(prompt)
print(response)&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;h3 id="using-format-strings-to-build-prompts"&gt;Using format strings to build prompts &lt;a href="#using-format-strings-to-build-prompts" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="c6f8340" class="language-python "&gt;
 &lt;code&gt;client = OpenAI(api_key=&amp;#34;&amp;lt;OPENAI_API_TOKEN&amp;gt;&amp;#34;)

# Create the output format
output_format = &amp;#34;Output three separate lines which will display the text copy, language detected, and the title generated. Those lines should be prefixed with `Text:`, `Language:`, and `Title:` respectively.&amp;#34;

# Create the overall instructions referencing variable components
instructions = &amp;#34;&amp;#34;&amp;#34;
Analyze the text listed below delimited by triple backticks. The analysis should detect the language of the delimited text and generate a short title for the text content.

The text to analyze is:
```
{text}
```

Generate the output using the following format:
{output_format}
&amp;#34;&amp;#34;&amp;#34;

# Create the final prompt
prompt = instructions.format(text=text, output_format=output_format)
response = get_response(prompt)
print(response)&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
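One pitfall worth noting with the pattern above: an f-string interpolates immediately at definition time, so calling .format() on it afterwards is redundant, whereas a plain template string keeps its placeholders until .format() fills them. A tiny illustration:

```python
text = "Hola mundo"

# f-string: interpolated immediately when the line executes
immediate = f"Text: {text}"

# plain template: the placeholder survives until .format() is called
template = "Text: {text}"
deferred = template.format(text=text)
```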
&lt;h3 id="using-conditional-prompt-language"&gt;Using conditional prompt language &lt;a href="#using-conditional-prompt-language" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="552cfda" class="language-python "&gt;
 &lt;code&gt;client = OpenAI(api_key=&amp;#34;&amp;lt;OPENAI_API_TOKEN&amp;gt;&amp;#34;)

# Create the instructions
# Use analogous if-then-else language to define a conditional prompt element
instructions = &amp;#34;Analyze the text listed below delimited by triple backticks. Determine the language of the text and the number of sentences in the text snippet. If the text snippet consists of more than one sentence then generate an appropriate short title for the text paragraph, otherwise generate a title string of `N/A`.&amp;#34;

# Create the output format
output_format = &amp;#34;Output the analysis as separate lines. Output a line displaying the text snippet prepended by the string `Text:`, a second line displaying the identified language prepended by string `Language:` and a third and final line displaying the generated title prepended by string `Title:`&amp;#34;

prompt = instructions &amp;#43; &amp;#34;\n&amp;#34; &amp;#43; output_format &amp;#43; f&amp;#34;\n```{text}```&amp;#34;
response = get_response(prompt)
print(response)&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;h2 id="chain-of-thought-prompting"&gt;&amp;ldquo;Chain of Thought&amp;rdquo; Prompting &lt;a href="#chain-of-thought-prompting" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Prompt example&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Assignment #1</title><link>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-1/</link><pubDate>Mon, 10 Nov 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-1/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-1.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="homework-1"&gt;Homework #1 &lt;a href="#homework-1" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ai.danfanderson.com/files/assignment-1.zip"&gt;Homework #1 Submission zip file&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="prompt-engineering"&gt;Prompt Engineering &lt;a href="#prompt-engineering" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="a5085c4" class="language-markdown "&gt;
 &lt;code&gt;# Most Frequent Character in a Country Name

This analysis identifies the country whose official name contains the most frequently repeated letter. 
Because “official” country lists vary across sources, the dataset used here is the **United Nations list of sovereign states**, as published on [Wikipedia](https://en.wikipedia.org/wiki/List_of_sovereign_states#List_of_states).

The following Python script counts letters in each country name (case-insensitive, ignoring spaces and punctuation) and reports the country with the single most frequently repeated character.

```python
def char_count(name: str):
    &amp;#34;&amp;#34;&amp;#34;
    Return (most_frequent_char, count, error_or_none) for a given country name.
    Counts letters only, case-insensitive.
    &amp;#34;&amp;#34;&amp;#34;
    if not isinstance(name, str) or not name:
        return None, 0, &amp;#34;err: passed an empty string or non-string value&amp;#34;

    counts = {}
    for ch in name.lower():
        # Use .isalpha() to ignore spaces, commas, etc.
        # Remove this check if you want to count all characters
        if ch.isalpha():
            counts[ch] = counts.get(ch, 0) &amp;#43; 1

    if not counts:
        return None, 0, &amp;#34;err: no countable characters&amp;#34;

    # Pick the (char, count) pair with the highest count
    most_char, max_count = max(counts.items(), key=lambda kv: kv[1])
    return most_char, max_count, None


def main():
    # Abridged here for display; the full UN member state list was used in the actual run
    countries = [
        &amp;#34;Afghanistan&amp;#34;, &amp;#34;Albania&amp;#34;, &amp;#34;Algeria&amp;#34;, &amp;#34;Andorra&amp;#34;, &amp;#34;Angola&amp;#34;,
        &amp;#34;Antigua and Barbuda&amp;#34;, &amp;#34;Argentina&amp;#34;, &amp;#34;Armenia&amp;#34;, &amp;#34;Australia&amp;#34;,
        &amp;#34;United States of America&amp;#34;, &amp;#34;Uruguay&amp;#34;, &amp;#34;Uzbekistan&amp;#34;, &amp;#34;Vanuatu&amp;#34;,
        &amp;#34;Venezuela, Bolivarian Republic of&amp;#34;, &amp;#34;Viet Nam&amp;#34;, &amp;#34;Yemen&amp;#34;,
        &amp;#34;Zambia&amp;#34;, &amp;#34;Zimbabwe&amp;#34;
    ]

    max_country = None
    max_char = None
    max_count = -1

    for country in countries:
        ch, cnt, err = char_count(country)
        if err is None and cnt &amp;gt; max_count:
            max_country, max_char, max_count = country, ch, cnt

    print(
        &amp;#34;Most frequently repeated character\n&amp;#34;
        f&amp;#34;Country: {max_country}\nChar: {max_char}\nNum: {max_count}&amp;#34;
    )


if __name__ == &amp;#34;__main__&amp;#34;:
    main()
```

## Output

The script identified **“United Kingdom of Great Britain and Northern Ireland”** as the country name with the most repeated letter. 
The letter **“n”** appears **seven times**.

```
Most frequently repeated character
Country: United Kingdom of Great Britain and Northern Ireland
Char: n
Num: 7
```

---

# Prompt &amp;amp; Model Experimentation

To evaluate how different LLMs handled this question, I tested multiple prompts and models using the OpenAI API. 
The experiment compared various phrasing strategies and model versions to measure accuracy and consistency.

## Models Tested

| Model | Accuracy | Required Source Prompt | Notes |
|--------|-----------|-----------------------|--------|
| GPT-4 | ❌ Often incorrect | ✅ Yes | Miscounted or inconsistent results |
| GPT-4o | ❌ Similar to GPT-4 | ✅ Yes | Slightly improved consistency |
| GPT-5 | ✅ Correct | ✅ Yes | Matched expected answer consistently |

## Prompt Characteristics

1. A simple text prompt asking the core question. 
2. Extended prompt instructing to treat vowels and consonants equally, and include multi-word names. 
3. Further extension emphasizing inclusion of stop words (e.g., *the*, *of*, *and*). 
4. More detailed instructions including a step-by-step task list. 
5. Prompts directing the model to use the **United Nations Member States** list as the official country source.

---

# Discussion on Results

### Performance by Model

Older models such as GPT‑4 and GPT‑4o performed inconsistently. In most cases, they produced incorrect results or miscounted letters. 
GPT‑5, by contrast, returned accurate and consistent results—especially when explicitly prompted to reference the UN Member States list.

### Why GPT‑5 Succeeded

The GPT‑5 model responded correctly across multiple prompt variations. 
The most reliable answers came from prompts that:

* Used GPT‑5 
* Included a link to the UN website as the authoritative source 
* Clearly explained how to count letters, including conjunctions and prepositions 

This suggests that GPT‑5’s performance benefits from both precise task instructions and explicit grounding in a definitive dataset.

---

# Successful Prompts and Responses

Below are selected prompt–response pairs in JSON format.

```json
[
 {
 &amp;#34;model&amp;#34;: &amp;#34;gpt-4&amp;#34;,
 &amp;#34;category&amp;#34;: &amp;#34;g-promptLetterDescMinorWordsTaskList-WithSource&amp;#34;,
 &amp;#34;prompt_text&amp;#34;: &amp;#34;In the context of world geography, can you tell me what country has the same letter repeated the most in its name?...&amp;#34;,
 &amp;#34;prompt_resp&amp;#34;: &amp;#34;From my training data, the longest country name is &amp;#39;The United Kingdom of Great Britain and Northern Ireland&amp;#39;...&amp;#34;
 },
 {
 &amp;#34;model&amp;#34;: &amp;#34;gpt-5&amp;#34;,
 &amp;#34;category&amp;#34;: &amp;#34;b-promptSimple-WithSource&amp;#34;,
 &amp;#34;prompt_text&amp;#34;: &amp;#34;In the context of world geography, can you tell me what country has the same letter repeated the most in its name?...&amp;#34;,
 &amp;#34;prompt_resp&amp;#34;: &amp;#34;Short answer: United Kingdom of Great Britain and Northern Ireland...&amp;#34;
 }
]
```

---

# Summary

The country name **“United Kingdom of Great Britain and Northern Ireland”** contains the most frequently repeated letter (**n = 7**) among UN‑recognized sovereign states. 
Across multiple model generations, GPT‑5 consistently produced the correct result when given detailed instructions and a definitive country list source.&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;h4 id="mainpy"&gt;&lt;code&gt;main.py&lt;/code&gt; &lt;a href="#mainpy" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;Update the &lt;code&gt;/api/parse-resume&lt;/code&gt; route handler&lt;/p&gt;</description></item><item><title>AI Applications</title><link>https://ai.danfanderson.com/docs/ai-applications/</link><pubDate>Sat, 22 Feb 2025 18:36:40 -0600</pubDate><guid>https://ai.danfanderson.com/docs/ai-applications/</guid><description>&lt;h2 id="planning-ai-applications"&gt;Planning AI Applications &lt;a href="#planning-ai-applications" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;p&gt;From the book &lt;em&gt;AI Engineering&lt;/em&gt;&lt;/p&gt;
&lt;h4 id="use-case-evaluation"&gt;Use Case Evaluation &lt;a href="#use-case-evaluation" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;p&gt;Question: why do you want to build this application?&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Competitive Pressure&lt;/strong&gt;: if you don&amp;rsquo;t build, will competitors with AI-driven capabilities make you obsolete?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Leverage Bigger / New Opportunities&lt;/strong&gt;: if you don&amp;rsquo;t build, will you miss opportunities to drive increased revenue and profit or to take advantage of new markets?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Understand New Technologies&lt;/strong&gt;: if you&amp;rsquo;re not sure how AI impacts the business, at least build to gain an understanding of that impact&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Think about the build vs buy paths&lt;/p&gt;</description></item><item><title>Assignment #2</title><link>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-2/</link><pubDate>Sun, 23 Nov 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-2/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-2.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="homework-2"&gt;Homework #2 &lt;a href="#homework-2" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ai.danfanderson.com/files/assignment-2.zip"&gt;Homework #1 Submission zip file&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="brief-notes"&gt;Brief Notes &lt;a href="#brief-notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;See the zip file for the major code deliverables for this assignment including:&lt;/p&gt;</description></item><item><title>Assignment #3 - Capstone Project Proposal</title><link>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-3-capstone/</link><pubDate>Sun, 23 Nov 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-3-capstone/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/dataexpert-io-ai-eng-bootcamp/dataexpert-io-ai-eng-assignment-3-capstone.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="homework-3"&gt;Homework #3 &lt;a href="#homework-3" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ai.danfanderson.com/files/proposal-submission.zip"&gt;Homework #3 Capstone Proposal Submission zip file&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="brief-notes"&gt;Brief Notes &lt;a href="#brief-notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;See the zip for the proposal text (&lt;code&gt;.md&lt;/code&gt;) and system context diagram&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="zach-ai-agent-feedback"&gt;Zach AI Agent feedback &lt;a href="#zach-ai-agent-feedback" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;The feedback is detailed and may be helpful in the future&lt;/p&gt;</description></item><item><title>Prompt Engineering</title><link>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering/</link><pubDate>Wed, 25 Feb 2026 18:50:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="notes"&gt;Notes &lt;a href="#notes" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;h4 id="principles-of-prompting"&gt;Principles of Prompting &lt;a href="#principles-of-prompting" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;Action&lt;/code&gt; verbs
&lt;ul&gt;
&lt;li&gt;Write&lt;/li&gt;
&lt;li&gt;Complete&lt;/li&gt;
&lt;li&gt;Explain&lt;/li&gt;
&lt;li&gt;Generate&lt;/li&gt;
&lt;li&gt;Describe&lt;/li&gt;
&lt;li&gt;Evaluate&lt;/li&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Formulate detailed instructions: &lt;em&gt;provide specific, descriptive, and detailed instructions on:&lt;/em&gt;
&lt;ul&gt;
&lt;li&gt;Context&lt;/li&gt;
&lt;li&gt;Output length&lt;/li&gt;
&lt;li&gt;Format &amp;amp; style&lt;/li&gt;
&lt;li&gt;Audience&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="basic-openai-api-code"&gt;Basic OpenAI API Code &lt;a href="#basic-openai-api-code" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Stub code to generate a response from OpenAI&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Neural Networks</title><link>https://ai.danfanderson.com/docs/neural-networks/</link><pubDate>Sun, 23 Feb 2025 15:55:17 -0600</pubDate><guid>https://ai.danfanderson.com/docs/neural-networks/</guid><description>&lt;h3 id="perceptrons"&gt;Perceptrons &lt;a href="#perceptrons" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;&lt;p&gt;A perceptron is a 1 neuron (node) model tha can be used to solve certain problems&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Basic examples are modeling: AND gate, OR gate&lt;/li&gt;
&lt;li&gt;Perceptrons work for problems that are linearly separable; that is, there is a linear expression that cleanly separates the outcomes.
&lt;ul&gt;
&lt;li&gt;in other words, you can draw a line such that the different types (e.g. classes) of outcomes fall on opposite sides&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;&lt;img src="https://ai.danfanderson.com/images/linearly_separable.png"&gt;
&lt;/figure&gt;

&lt;h4 id="intuition"&gt;Intuition &lt;a href="#intuition" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;the point of the weighted sum and bias calculation is to define the line that serves as the &lt;code&gt;decision boundary&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;all points on one side of the decision boundary are in one class (e.g. &lt;code&gt;1&lt;/code&gt;) and the others are in the other class (e.g. &lt;code&gt;0&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;an activation function (threshold function) calculates the output node&amp;rsquo;s value, converting whether the weighted sum falls over or under the decision boundary into a class&lt;/li&gt;
&lt;li&gt;Process
&lt;ol&gt;
&lt;li&gt;configure an architecture (2-2-1) of:
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;2&lt;/code&gt; input neurons (x to be ANDed with y)&lt;/li&gt;
&lt;li&gt;a &amp;ldquo;hidden&amp;rdquo; layer of &lt;code&gt;2&lt;/code&gt; neurons that serve as the interim output of the weighted sum &amp;amp; bias calculation&lt;/li&gt;
&lt;li&gt;&lt;code&gt;1&lt;/code&gt; output neuron&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
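&lt;p&gt;The decision-boundary idea above can be sketched as a minimal perceptron modeling an AND gate; the weights and bias below are hand-picked for illustration, not taken from the course:&lt;/p&gt;

```python
# Minimal perceptron sketch: hand-picked weights model an AND gate.
def perceptron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
    z = w1 * x1 + w2 * x2 + bias   # weighted sum plus bias
    return 1 if z >= 0 else 0      # step (threshold) activation

# AND truth table: only (1, 1) lands on the "1" side of the boundary
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, perceptron(a, b))
```

&lt;p&gt;The line &lt;code&gt;x1 + x2 - 1.5 = 0&lt;/code&gt; is the decision boundary; the step function simply reports which side of it a point falls on.&lt;/p&gt;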
&lt;p&gt;A perceptron computes a weighted sum of its inputs:&lt;/p&gt;</description></item><item><title>Advanced Prompt Engineering</title><link>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering-advanced/</link><pubDate>Mon, 09 Mar 2026 15:32:35 -0500</pubDate><guid>https://ai.danfanderson.com/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering-advanced/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/courses-workshops/datacamp-ai-eng-for-devs/prompt-engineering-advanced.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;h2 id="shot-prompting"&gt;&amp;ldquo;Shot&amp;rdquo; Prompting &lt;a href="#shot-prompting" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Few Shot prompting&lt;/strong&gt;&lt;/p&gt;</description></item><item><title>Large Language Models</title><link>https://ai.danfanderson.com/docs/large-language-models/</link><pubDate>Wed, 19 Feb 2025 15:34:22 -0600</pubDate><guid>https://ai.danfanderson.com/docs/large-language-models/</guid><description>&lt;h2 id="resources-and-links"&gt;Resources and Links &lt;a href="#resources-and-links" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/playlist?list=PLWX9jswdDQ0VI1jXwgcH9tNNKc1lxdgOo" rel="external" target="_blank"&gt;Dan&amp;rsquo;s Lambda School DS Youtube Playlist&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;(probably dated but maybe good background on certain topics?)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dwhitena/go-genai-webinar" rel="external" target="_blank"&gt;Daniel Whitenack&amp;rsquo;s (DW) go-genai-webinar repo&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;seminar presentation: &lt;a href="https://docs.google.com/presentation/d/1tVwxGYSUMp76l4gEi_1mLvoRnYylhGg5JqIbWLUef6s/edit?usp=sharing" rel="external" target="_blank"&gt;link&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Youtube seminar video (sponsored by Arden Labs): &lt;a href="https://www.youtube.com/live/ajfzcXUxgsE?si=Sb3wJ_X4sP_VF29a" rel="external" target="_blank"&gt;Youtube link&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.predictionguard.com/home/getting-started/welcome" rel="external" target="_blank"&gt;Prediction Guard Documentation&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt; - some open models available&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cohere&lt;/strong&gt;: &lt;a href="https://cohere.com/" rel="external" target="_blank"&gt;https://cohere.com/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt; | Cohere brings you cutting-edge multilingual models, advanced retrieval, and an AI workspace tailored for the modern enterprise — all within a single, secure platform&lt;/li&gt;
&lt;li&gt;DW suggested links:
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/" rel="external" target="_blank"&gt;Prompt Engineering from Lil&amp;rsquo;Log&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/dair-ai/Prompt-Engineering-Guide" rel="external" target="_blank"&gt;Prompt Engineering Guide from DAIR&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huyenchip.com//2023/04/11/llm-engineering.html#prompt_evaluation" rel="external" target="_blank"&gt;Building LLM applications for production from Chip Huyen&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;For building personal / low-cost projects, look at HuggingFace (ZeroGPU program?)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DeepNote&lt;/strong&gt;: AI driven notebooks - &lt;a href="https://deepnote.com/" rel="external" target="_blank"&gt;https://deepnote.com/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Science from Scratch code&lt;/strong&gt;: &lt;a href="https://github.com/joelgrus/data-science-from-scratch" rel="external" target="_blank"&gt;https://github.com/joelgrus/data-science-from-scratch&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="notes-on-large-language-models"&gt;Notes on Large Language Models &lt;a href="#notes-on-large-language-models" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;(Very) High Level Notes on LLM Execution
&lt;ul&gt;
&lt;li&gt;LLMs take a prompt and then calculate probabilities of words (tokens?) that should follow each other&lt;/li&gt;
&lt;li&gt;For prompt &lt;code&gt;Go is...&lt;/code&gt; the LLM may generate these words in descending order of probability:
&lt;ol&gt;
&lt;li&gt;&lt;code&gt;a&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;programming&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;language&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&amp;hellip;&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: this is very similar to &lt;em&gt;autocomplete&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;LLMs will calculate probabilities for every word (token?) they know about (DA: the scale seems massive)&lt;/li&gt;
&lt;li&gt;LLM &lt;code&gt;Temperature&lt;/code&gt; configuration setting: sounds like it drives some sort of variability into the output so that the results are not always driven by strict probabilities (otherwise the output would be boring or tend to lack creativity)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Enterprises are gradually migrating to use Open LLMs (e.g. Llama 3, deepseek, Mistral) from Closed LLMs (e.g. OpenAI [ChatGPT], Anthropic [Claude])&lt;/li&gt;
&lt;li&gt;Daniel Whitenack&amp;rsquo;s spectrum of AI complexity:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Basic Prompting&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prompt Engineering (CoT, templates, parameters)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Augmentation, Retrieval&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Agents, Chaining&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Fine-tuning via a closed API&lt;/li&gt;
&lt;li&gt;Fine-tuning an open model&lt;/li&gt;
&lt;li&gt;Training a model from scratch&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Prompts
&lt;ul&gt;
&lt;li&gt;LLMs are tuned to use prompts that are formatted in specific ways (rather than a simple text question)
&lt;ul&gt;
&lt;li&gt;Check out: &lt;a href="https://docs.predictionguard.com/guides-and-concepts/using-ll-ms/prompt-engineering" rel="external" target="_blank"&gt;https://docs.predictionguard.com/guides-and-concepts/using-ll-ms/prompt-engineering&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Accuracy
&lt;ul&gt;
&lt;li&gt;Autocomplete LLMs focus on coherency, so their answers may be coherent but not necessarily accurate. PredictionGuard uses factual consistency checking models to confirm accuracy.
&lt;ul&gt;
&lt;li&gt;For example: &lt;code&gt;The White House is painted pink&lt;/code&gt;. (the sentence is coherent but not accurate)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Training Models
&lt;ul&gt;
&lt;li&gt;DW: &amp;ldquo;&lt;em&gt;You should never, ever, ever have to train a model for the rest of your life&lt;/em&gt;&amp;rdquo;
&lt;ul&gt;
&lt;li&gt;You should&amp;hellip;
&lt;ul&gt;
&lt;li&gt;Use an open model and inject your data into the prompt&lt;/li&gt;
&lt;li&gt;At most, you may need to fine tune a model&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
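&lt;p&gt;The &lt;code&gt;Temperature&lt;/code&gt; note above can be sketched: temperature rescales the model&amp;rsquo;s next-token scores before sampling. The scores below are made up for illustration, not any real model&amp;rsquo;s output:&lt;/p&gt;

```python
import math

def next_token_probs(logits, temperature=1.0):
    # Divide each raw score by the temperature, then softmax into probabilities.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / total for tok, v in scaled.items()}

# Hypothetical next-token scores for the prompt "Go is..."
logits = {"a": 2.0, "programming": 1.0, "language": 0.5}
print(next_token_probs(logits, temperature=0.5))  # sharper: top token dominates
print(next_token_probs(logits, temperature=2.0))  # flatter: more variety
```

&lt;p&gt;Low temperature pushes sampling toward the single most probable token (near-greedy); high temperature flattens the distribution, which is where the &amp;ldquo;creativity&amp;rdquo; comes from.&lt;/p&gt;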
&lt;h2 id="llm-model-api-behavior"&gt;LLM Model API Behavior &lt;a href="#llm-model-api-behavior" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;Daniel Whitenack: most APIs, including PredictionGuard (Daniel&amp;rsquo;s company), will start streaming completion text immediately and essentially emit it serially
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: In the PredictionGuard Go code, they use a channel to receive that stream (and then your Go program can start printing it?)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="embeddings--vector-representations"&gt;Embeddings / Vector Representations &lt;a href="#embeddings--vector-representations" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;See &lt;a href="https://cohere.com/" rel="external" target="_blank"&gt;https://cohere.com/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt; for a service to generate / capture embeddings&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="prompts"&gt;Prompts &lt;a href="#prompts" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;Prompt Formatting: various models are trained to handle prompts with specific text formatting. Structuring prompts in this way should optimize execution (?)&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="chatml-format"&gt;ChatML Format &lt;a href="#chatml-format" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;the actual text goes into the curly-brace placeholders&lt;/li&gt;
&lt;/ul&gt;
 &lt;div class="prism-codeblock "&gt;
 &lt;pre id="ae3b2a1" class="language- "&gt;
 &lt;code&gt;&amp;lt;|im_start|&amp;gt;system
{prompt}&amp;lt;|im_end|&amp;gt;
&amp;lt;|im_start|&amp;gt;user
{context or user message}&amp;lt;|im_end|&amp;gt;
&amp;lt;|im_start|&amp;gt;assistant&amp;lt;|im_end|&amp;gt;&lt;/code&gt;
 &lt;/pre&gt;
 &lt;/div&gt;
&lt;h2 id="large-language-vs-foundation-models"&gt;Large Language vs. Foundation Models &lt;a href="#large-language-vs-foundation-models" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;p&gt;&lt;em&gt;From ChatGPT&lt;/em&gt; Summary of differences between Large Language and Foundation Models&lt;/p&gt;</description></item><item><title>Understanding Foundational Models</title><link>https://ai.danfanderson.com/docs/understanding-foundational-models/</link><pubDate>Mon, 24 Feb 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/understanding-foundational-models/</guid><description>&lt;h2 id="training-data"&gt;Training Data &lt;a href="#training-data" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;an AI model is only as good as the data it was trained on&lt;/li&gt;
&lt;li&gt;common sources of general training data
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Common Crawl&lt;/strong&gt;: sporadic crawls of web data done by a nonprofit organization &lt;a href="https://commoncrawl.org/" rel="external" target="_blank"&gt;https://commoncrawl.org/&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Google has a clean subset of this data called the &lt;em&gt;Colossal Clean Crawled Corpus&lt;/em&gt; (or &lt;code&gt;C4&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;general-purpose foundation models will typically perform less well on domain-specific tasks (due to less domain-specific training data)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="modeling"&gt;Modeling &lt;a href="#modeling" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;ul&gt;
&lt;li&gt;What should modelers consider?
&lt;ul&gt;
&lt;li&gt;Model architecture?&lt;/li&gt;
&lt;li&gt;Number of parameters?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="model-architecture"&gt;Model Architecture &lt;a href="#model-architecture" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Transformer Architecture &lt;a href="https://ai.danfanderson.com/docs/glossary/#---t---"&gt;see glossary&lt;/a&gt; is currently the dominant architecture for language-based Foundation Models
&lt;ul&gt;
&lt;li&gt;training is based on the &lt;code&gt;attention mechanism&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;intended to solve some prevailing problems:
&lt;ol&gt;
&lt;li&gt;the previous seq2seq architecture generated output based only on the final hidden state of the input (analogy: like generating answers about a book by just reading its summary)&lt;/li&gt;
&lt;li&gt;using RNN encoders and decoders meant that input processing and output generation were done sequentially, which is slow for inputs with lots of tokens&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Transformer Architecture Inference: leverages parallel execution
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Prefill step&lt;/em&gt;: model processes input tokens in parallel
&lt;ul&gt;
&lt;li&gt;this produces key and value vectors for all input tokens&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Decode step&lt;/em&gt;: the model generates one output token at a time&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;Attention Mechanism
&lt;ul&gt;
&lt;li&gt;use &lt;code&gt;key&lt;/code&gt;, &lt;code&gt;value&lt;/code&gt;, and &lt;code&gt;query&lt;/code&gt; vectors&lt;/li&gt;
&lt;li&gt;&lt;em&gt;From ChatGPT&lt;/em&gt;: the &lt;code&gt;key&lt;/code&gt; vectors are used for matching and weighting (determining &amp;ldquo;where to look&amp;rdquo;), while the &lt;code&gt;value&lt;/code&gt; vectors provide the substantive information (&amp;ldquo;what to show&amp;rdquo;) during the attention computation&lt;/li&gt;
&lt;li&gt;&lt;em&gt;From ChatGPT&lt;/em&gt;: &lt;code&gt;query&lt;/code&gt; vector is a high-dimensional representation derived from an input element (e.g., a token in a sentence). It represents the &amp;ldquo;question&amp;rdquo; that the model asks of other tokens in the sequence&lt;/li&gt;
&lt;li&gt;attention mechanism computes how much attention to give an input token by performing a dot product between a &lt;code&gt;query&lt;/code&gt; vector and its &lt;code&gt;key&lt;/code&gt; vector&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
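&lt;p&gt;The query/key/value description above can be sketched with toy vectors (the shapes and numbers are illustrative only, and this omits the scaling and multi-head machinery of a real transformer):&lt;/p&gt;

```python
import math

def attention(query, keys, values):
    # Score each input token: dot product of the query with that token's key
    # vector ("where to look").
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Softmax the scores into attention weights that sum to 1.
    total = sum(math.exp(s) for s in scores)
    weights = [math.exp(s) / total for s in scores]
    # Output: attention-weighted sum of the value vectors ("what to show").
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy 2-D key and value vectors for three input tokens
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
print(attention([1.0, 0.0], keys, values))
```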
&lt;h4 id="model-size"&gt;Model Size &lt;a href="#model-size" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Number of parameters is usually appended to the model name: e.g. &lt;code&gt;Llama-13B&lt;/code&gt; has ~13 billion parameters&lt;/li&gt;
&lt;li&gt;Generally more parameters means more capacity to learn
&lt;ul&gt;
&lt;li&gt;However, newer models generally perform better even if they are smaller&lt;/li&gt;
&lt;li&gt;a parameter is usually stored in 2 bytes (16 bits)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;a &lt;code&gt;sparse model&lt;/code&gt; has a lot of zero value parameters&lt;/li&gt;
&lt;li&gt;Training size
&lt;ul&gt;
&lt;li&gt;dataset sizes are measured by the number of training samples&lt;/li&gt;
&lt;li&gt;Language Models: a training sample can be a sentence, a Wikipedia page, a chat conversation, or a book&lt;/li&gt;
&lt;li&gt;currently LLMs are trained on datasets representing trillions of tokens&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
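&lt;p&gt;The 2-bytes-per-parameter note above gives a quick back-of-the-envelope estimate of a model&amp;rsquo;s weight memory (weights only; activations, KV cache, and runtime overhead are ignored):&lt;/p&gt;

```python
def weight_memory_gb(num_params, bytes_per_param=2):
    # 16-bit (2-byte) parameters; 1 GB = 1024**3 bytes
    return num_params * bytes_per_param / 1024**3

# Llama-13B at 16-bit precision: about 24 GB of weights
print(round(weight_memory_gb(13e9), 1))
```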
&lt;h4 id="post-training"&gt;Post Training &lt;a href="#post-training" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Refine trained models: e.g. move from text completion to conversation&lt;/li&gt;
&lt;li&gt;Post training - high level steps
&lt;ol&gt;
&lt;li&gt;Supervised finetuning (SFT): fine-tune the model on high-quality instruction data to optimize it for conversations instead of completion&lt;/li&gt;
&lt;li&gt;Preference finetuning: further finetune the model to output responses that align with human preferences, typically using reinforcement learning&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;Pretraining focuses on optimizing token-level quality. Post-training focuses on the quality of the overall response&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="infererence"&gt;Infererence &lt;a href="#infererence" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;hr&gt;
&lt;p&gt;When a production LLM receives a string prompt, it follows these high-level steps to generate an inference:&lt;/p&gt;</description></item><item><title>Agentic AI</title><link>https://ai.danfanderson.com/docs/agentic_ai/</link><pubDate>Mon, 31 Mar 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/agentic_ai/</guid><description>&lt;p&gt;(from Yeh course unless specified otherwise)&lt;/p&gt;
&lt;h4 id="links--resources"&gt;Links &amp;amp; Resources &lt;a href="#links--resources" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Frameworks
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Auto Jam&lt;/em&gt; – Simplifies complex AI workflows using pre-built templates.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Meta GPT &amp;amp; Crew AI&lt;/em&gt; – Enable highly customized multi-agent simulations, mimicking human roles&lt;/li&gt;
&lt;li&gt;Wikipedia on &lt;a href="https://en.wikipedia.org/wiki/Multi-agent_system" rel="external" target="_blank"&gt;multi agent systems&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Apps / Tools
&lt;ul&gt;
&lt;li&gt;Manus: &lt;a href="https://manus.im" rel="external" target="_blank"&gt;https://manus.im&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Mentioned in Yeh&amp;rsquo;s course&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Crew AI: &lt;a href="https://www.crewai.com" rel="external" target="_blank"&gt;https://www.crewai.com&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Agents &amp;amp; Payments
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://x402.org" rel="external" target="_blank"&gt;https://x402.org&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;ChatGPT suggested &lt;code&gt;x402&lt;/code&gt; learning project: &lt;a href="https://chatgpt.com/s/t_6973ee78d9b48191bebe1922844b5771" rel="external" target="_blank"&gt;https://chatgpt.com/s/t_6973ee78d9b48191bebe1922844b5771&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blockrun.ai" rel="external" target="_blank"&gt;https://blockrun.ai&lt;svg width="16" height="16" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"&gt;&lt;path fill="currentColor" d="M14 5c-.552 0-1-.448-1-1s.448-1 1-1h6c.552 0 1 .448 1 1v6c0 .552-.448 1-1 1s-1-.448-1-1v-3.586l-7.293 7.293c-.391.39-1.024.39-1.414 0-.391-.391-.391-1.024 0-1.414l7.293-7.293h-3.586zm-9 2c-.552 0-1 .448-1 1v11c0 .552.448 1 1 1h11c.552 0 1-.448 1-1v-4.563c0-.552.448-1 1-1s1 .448 1 1v4.563c0 1.657-1.343 3-3 3h-11c-1.657 0-3-1.343-3-3v-11c0-1.657 1.343-3 3-3h4.563c.552 0 1 .448 1 1s-.448 1-1 1h-4.563z"/&gt;&lt;/svg&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4 id="background--intro"&gt;Background &amp;amp; Intro &lt;a href="#background--intro" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Four behaviors of Agents (from the Yeh course)
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Reflection&lt;/em&gt;: Before responding, the agent assesses whether it needs more information.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Tool Use&lt;/em&gt;: Agents access external resources, like checking live flight prices or retrieving updated data.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Planning&lt;/em&gt;: They break down complex tasks into step-by-step solutions.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Multi-Agent Coordination&lt;/em&gt;: Multiple agents work together like an efficient team, each handling different roles.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;/ul&gt;
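The four behaviors above can be sketched as a toy agent loop. Everything here — the tool registry, the reflection check, the plan steps — is my own illustrative invention, not code from the course:

```python
# Illustrative sketch of Reflection, Tool Use, and Planning as one loop.
# All names and values are hypothetical.

def needs_more_info(prompt):
    """Reflection: before responding, assess whether external data is needed."""
    return "price" in prompt

# Tool Use: external resources the agent can call (here, a stubbed price lookup)
TOOLS = {"get_flight_price": lambda: 420}

def plan(task):
    """Planning: break the task into step-by-step actions."""
    return ["check prices", "compare options", "answer"]

def run_agent(prompt):
    steps = plan(prompt)
    transcript = []
    for step in steps:
        if needs_more_info(prompt) and step == "check prices":
            transcript.append(f"flight price: ${TOOLS['get_flight_price']()}")
        else:
            transcript.append(step)
    return transcript

print(run_agent("what's the flight price to Denver?"))
# → ['flight price: $420', 'compare options', 'answer']
```

Multi-agent coordination would layer on top of this: several such loops, each with a different role, passing transcripts to one another.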
&lt;h4 id="yehs-framework-or-equation"&gt;Yeh&amp;rsquo;s Framework (or &amp;ldquo;Equation&amp;rdquo;) &lt;a href="#yehs-framework-or-equation" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h4&gt;&lt;ul&gt;
&lt;li&gt;Yeh breaks down the Agent&amp;rsquo;s engagement and interaction into parts of a framework:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;See&lt;/strong&gt;: assume this is the ability for the Agent to see (process?) an initial prompt entered by the user&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Think&lt;/strong&gt;: assume this is a directive to the agent to engage in a particular way:
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Role&lt;/em&gt;: adopt a role as the Agent (&amp;ldquo;act as a helpful real estate agent&amp;rdquo;)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Plan&lt;/em&gt;: step through a series of process steps to ultimately achieve the goal&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Remember&lt;/strong&gt;: use data to aid in the process, seems to come in a couple flavors:
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Historical Data&lt;/em&gt;: assume this is past interactions the Agent has had with this user (?)&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Contextual Data&lt;/em&gt;: assume this is application data (e.g. from a database or system) or third-party data (e.g. stock prices)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Can&lt;/strong&gt;: assume this is the ability to execute particular tasks or invoke other entities to execute tasks on the Agent&amp;rsquo;s behalf (e.g. initiate a call, book an appointment)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;That whole package above ⬆️ is considered a big prompt to be executed by an LLM&lt;/li&gt;
&lt;/ul&gt;
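That "big prompt" idea can be sketched by concatenating the framework's parts into one string. This is my reading of the framework, with entirely hypothetical field names and example data:

```python
# Hypothetical sketch: assembling See / Think / Remember / Can into one
# large prompt for an LLM. All names and values here are illustrative.

def build_agent_prompt(user_input, role, plan_steps, history, context, tools):
    """Combine the framework's parts into a single prompt string."""
    plan = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(plan_steps))
    memory = "\n".join(history + context)
    return (
        f"Role: {role}\n"                        # Think: adopt a role
        f"Plan:\n{plan}\n"                       # Think: step-by-step plan
        f"Memory:\n{memory}\n"                   # Remember: historical + contextual data
        f"Available actions: {', '.join(tools)}\n"  # Can: tasks the agent may invoke
        f"User says: {user_input}"               # See: the initial user prompt
    )

prompt = build_agent_prompt(
    user_input="Find me a 3-bedroom house under $400k",
    role="helpful real estate agent",
    plan_steps=["clarify requirements", "search listings", "summarize matches"],
    history=["User previously asked about Austin, TX"],
    context=["Current mortgage rate: 6.1%"],
    tools=["search_listings", "book_showing"],
)
print(prompt.splitlines()[0])  # → Role: helpful real estate agent
```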
&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/agentic_ai.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;</description></item><item><title>Cook/Kitchen AI Model Analogy</title><link>https://ai.danfanderson.com/docs/neural-network-chef-analogy/</link><pubDate>Tue, 04 Nov 2025 17:55:03 -0600</pubDate><guid>https://ai.danfanderson.com/docs/neural-network-chef-analogy/</guid><description>&lt;hr&gt;
&lt;small&gt;
 &lt;a href="https://github.com/danoand/danodocs-ai/blob/master/content/docs/neural-network-chef-analogy.md" target="_blank" rel="noopener noreferrer"&gt;
 &lt;svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="currentColor" class="bi bi-github" viewBox="0 0 16 16"&gt;
 &lt;path d="M8 0a8 8 0 0 0-2.53 15.588c.4.074.55-.174.55-.386v-1.42c-2.24.486-2.71-1.07-2.71-1.07-.366-.93-.894-1.176-.894-1.176-.73-.5.056-.49.056-.49.807.056 1.303.83 1.303.83.716 1.23 1.88.875 2.34.669a1.67 1.67 0 0 1 .5-1c-2.22-.25-4.56-1.11-4.56-4.94a3.87 3.87 0 0 1 .97-2.68A3.6 3.6 0 0 1 .67 5s-.22-.7-.03-1c0 0 .84-.27 2.75 1a9.42 9.42 0 0 1 5 .001C13 .73 13 .27 13 .27c2 .73 2 .73 2 .73s-.22 .7-.03 .99a3.6 3.6 0 0 1 .97 2.68c0 3.83-2.35 4.68-4.58 4.93a1.67 1.67 0 0 1 .5 .99v1c0 .215 .15 .464 .55 .386A8 8 0 0 0 8 .001z"/&gt;
 &lt;/svg&gt;
 View this page on GitHub
 &lt;/a&gt;
&lt;/small&gt;
&lt;hr&gt;
&lt;h2 id="the-full-kitchen-story"&gt;The Full Kitchen Story &lt;a href="#the-full-kitchen-story" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;AI Concept&lt;/th&gt;
 &lt;th&gt;Kitchen Analogy&lt;/th&gt;
 &lt;th&gt;What’s Really Happening&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Model Architecture&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;The recipe book&lt;/strong&gt; (list of steps: whisk, bake, reduce…)&lt;/td&gt;
 &lt;td&gt;Fixed sequence of layers (Linear → ReLU → Attention…)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Weights (θ)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Muscle memory &amp;amp; knob settings&lt;/strong&gt; on ovens, mixers, timers&lt;/td&gt;
 &lt;td&gt;Learned parameters that transform inputs&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Training Data&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Thousands of ingredient bags&lt;/strong&gt; (flour, sugar, spices, labeled “good cake” or “burnt”)&lt;/td&gt;
 &lt;td&gt;Labeled examples $ (x, y) $&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Forward Pass&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Following the recipe step-by-step&lt;/strong&gt; to produce a cake&lt;/td&gt;
 &lt;td&gt;$ \hat{y} = f(x; \theta) $&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Loss Function&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Taste test by a picky judge&lt;/strong&gt; (score 1–10)&lt;/td&gt;
 &lt;td&gt;$ \mathcal{L}(\hat{y}, y) $&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Backpropagation&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Judge writes notes on every step&lt;/strong&gt;: “Too much salt here → reduce shaker next time”&lt;/td&gt;
 &lt;td&gt;Chain-rule gradients $ \partial\mathcal{L}/\partial\theta $&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Optimizer (Adam, SGD)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Sous-chef who physically adjusts every knob&lt;/strong&gt; based on judge’s notes&lt;/td&gt;
 &lt;td&gt;$ \theta \leftarrow \theta - \eta \cdot g $&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Epoch&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;One full day of baking dozens of cakes&lt;/strong&gt;, tasting, adjusting, repeat&lt;/td&gt;
 &lt;td&gt;Full pass over dataset&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Validation Set&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Separate table of guest tasters&lt;/strong&gt; who never give adjustment notes&lt;/td&gt;
 &lt;td&gt;Monitor generalization&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Overfitting&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Chef memorizes &lt;em&gt;exactly&lt;/em&gt; how the training cakes tasted&lt;/strong&gt;, fails on new guests&lt;/td&gt;
 &lt;td&gt;High training acc, low val acc&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Regularization (Dropout, Weight Decay)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Randomly turning off one burner&lt;/strong&gt; or &lt;strong&gt;fining the chef for using too much butter&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Prevent over-reliance on any step&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Learning Rate&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;How boldly the sous-chef turns the knobs&lt;/strong&gt; — too big → overshoot, too small → stuck&lt;/td&gt;
 &lt;td&gt;Step size η&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Gradient Clipping&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Putting a safety cap on the gas valve&lt;/strong&gt; so it can’t explode&lt;/td&gt;
 &lt;td&gt;Prevent exploding gradients&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;h3 id="training-phase--apprenticeship-in-the-kitchen"&gt;Training Phase = &lt;strong&gt;Apprenticeship in the Kitchen&lt;/strong&gt; &lt;a href="#training-phase--apprenticeship-in-the-kitchen" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Day 1&lt;/strong&gt;: Open recipe book, set all oven dials randomly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bake a cake&lt;/strong&gt; (forward pass).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Judge tastes → “6/10, too dry, not sweet enough”&lt;/strong&gt; (loss).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Judge circles mistakes on every step&lt;/strong&gt; → backprop.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sous-chef tweaks every dial a tiny bit&lt;/strong&gt; → optimizer step.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Repeat for 100 cakes (one epoch)&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;End of week&lt;/strong&gt;: Chef now bakes &lt;em&gt;training cakes&lt;/em&gt; at 9.8/10.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Guest table (validation)&lt;/strong&gt; still says 7/10 → &lt;em&gt;overfitting&lt;/em&gt;.&lt;/li&gt;
&lt;/ol&gt;
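The apprenticeship steps above map onto a bare-bones training loop. A minimal sketch in plain Python, fitting a single "dial" on a toy recipe (the true relationship is y = 2x; the data and learning rate are made up for illustration):

```python
# Minimal sketch of the training loop: forward pass, loss, gradient,
# optimizer step, repeated for many epochs. Toy one-parameter model.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # training "cakes": (x, y)
w = 0.0      # the oven dial, starting at an arbitrary setting
lr = 0.05    # learning rate: how boldly the sous-chef turns the knob

for epoch in range(100):            # one epoch = one full day of baking
    for x, y in data:
        y_hat = w * x               # forward pass: bake the cake
        loss = (y_hat - y) ** 2     # loss: the judge's taste score
        grad = 2 * (y_hat - y) * x  # backprop: the judge's notes on this dial
        w -= lr * grad              # optimizer step: tweak the dial slightly

print(round(w, 2))  # → 2.0 (the dial converges to the true recipe)
```

With a held-out validation pair you would also watch for the overfitting gap the guest table exposes in step 8; this toy model has too few dials to overfit.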
&lt;hr&gt;
&lt;h3 id="inference-phase--restaurant-service"&gt;Inference Phase = &lt;strong&gt;Restaurant Service&lt;/strong&gt; &lt;a href="#inference-phase--restaurant-service" class="anchor" aria-hidden="true"&gt;&lt;i class="material-icons align-middle"&gt;link&lt;/i&gt;&lt;/a&gt;&lt;/h3&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;AI Step&lt;/th&gt;
 &lt;th&gt;Kitchen Action&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Load model&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Open restaurant, &lt;strong&gt;recipe book is now laminated&lt;/strong&gt; — no edits!&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Preprocess input&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Customer orders “chocolate cake” → &lt;strong&gt;measure exact 200 g flour, 150 g sugar&lt;/strong&gt; (tokenize, normalize)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Forward pass&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Follow recipe &lt;em&gt;exactly&lt;/em&gt;&lt;/strong&gt; → mix, bake, cool&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Post-process&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Dust with powdered sugar, plate nicely&lt;/strong&gt; (softmax → argmax, detokenize)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Return result&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;Serve cake in &amp;lt; 2 minutes&lt;/strong&gt; — customer happy&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
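The serving pipeline in the table reduces to three frozen functions: preprocess, forward pass, post-process. A toy sketch, with made-up weights and a threshold standing in for softmax/argmax:

```python
# Sketch of inference: weights are loaded and frozen ("laminated recipe
# book"), so serving is preprocess -> forward -> post-process, no updates.
# All names and numbers are illustrative.

W = [0.8, -0.3, 0.5]  # loaded weights — never changed at inference time

def preprocess(order):
    """Tokenize/normalize the raw input (measure the exact ingredients)."""
    return [len(word) / 10 for word in order.split()]

def forward(features):
    """Follow the recipe exactly: a fixed dot product, no learning."""
    return sum(w * f for w, f in zip(W, features))

def postprocess(score):
    """Plate the result (a threshold here, instead of softmax -> argmax)."""
    return "cake ready" if score > 0 else "remake"

print(postprocess(forward(preprocess("chocolate cake please"))))
# → cake ready
```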
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;No judge. No notes. No knob-twiddling.&lt;/strong&gt;&lt;br&gt;
Just &lt;strong&gt;perfect, repeatable execution&lt;/strong&gt;.&lt;/p&gt;</description></item></channel></rss>