MOST Important AGENTIC Application - Speech to Text to AI Agents (TTS, STT, LLM Router)
IndyDevDan
12.8K subscribers
7,548 views

 Published On Apr 15, 2024

What comes next after AI Agents? What's the most useful workflow for your agents?

The answer is pretty clear: the best way to use your growing collection of AI Agents is as a personal AI assistant.

Not just 'a' personal AI assistant, YOUR personal AI assistant.

Imagine a tool so powerful it feels like an extension of your mind. In this video, we dive into the creation of the most important agentic application we can build and use: Your Personal AI assistant. This tool will be limited only by your imagination, and your ability to hop into your python or typescript code and COOK up great agentic workflows, AI Agents, prompt chains, and individual prompts. Your personal assistant can code for you, research for you, and organize your digital life. BUT in order to get to that vision we have to take small, incremental steps. Here we look at an EARLY prototype of what future personal AI assistants (the next level of VAs) will look like through ADA. ADA is the name of my personal AI assistant. It's a prototype to show what this technology will be able to do for you.

In order to make use of your AI Agents and prompt chains in the form of your personal AI assistant, we need a framework for prompting your agents. In this video we introduce two critical frameworks for building your personal AI assistant: the PAR framework, and the simple keyword AI Agent Router (LLM Router). The PAR framework sets up a clean loop between you and your personal AI assistant. First you speak in natural language; we run speech-to-text (STT) to capture your voice and convert it into text, which becomes your prompt (natural language prompt).

Next we use an LLM Router called the simple keyword AI Agent Router, which takes your prompt and decides which AI Agents to run. Your agents run their individual, isolated workflows, and finally your personal AI assistant (AI VA) responds to you using text-to-speech (TTS), completing the PAR framework.
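To make the loop concrete, here's a minimal Python sketch of the PAR cycle. It's an illustration under my own assumptions, not ADA's actual code: the function names are placeholders, and input()/print() stand in for real STT and TTS engines.

    # PAR loop sketch: Prompt -> Agent -> Response
    # input() and print() stand in for real STT/TTS engines.
    def echo_agent(prompt: str) -> str:
        # placeholder agent: a real one would run its own isolated workflow
        return f"You said: {prompt}"

    def route_prompt(prompt: str):
        # placeholder router: always picks the echo agent
        return echo_agent

    while True:
        prompt = input("You: ")           # Prompt: STT would produce this text
        agent = route_prompt(prompt)      # Router: decide which agent handles it
        response = agent(prompt)          # Agent: run the selected workflow
        print("Assistant:", response)     # Response: TTS would speak this back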

The beauty of this framework is that it doesn't make any assumptions about your prompts, prompt chains, or agents; all of that runs through the LLM Router based on the activation keywords in your prompt. You can run LangChain, CrewAI, AutoGen, or any other agent framework to build and run your agentic workflows. In future videos we'll be utilizing the AgentOS micro architecture to build reusable, composable AI Agents. Our LLM Router will then route to our individual agents to run dedicated functionality.
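For reference, a simple keyword AI Agent Router can be as small as a dictionary lookup. This is a hedged sketch under my own assumptions (the keywords, agent functions, and fallback are illustrative); each value could just as easily hand off to a LangChain, CrewAI, or AutoGen workflow.

    # Simple keyword AI Agent Router sketch: activation keyword -> agent callable
    def code_agent(prompt: str) -> str:
        return "running the coding workflow for: " + prompt

    def research_agent(prompt: str) -> str:
        return "running the research workflow for: " + prompt

    def fallback_agent(prompt: str) -> str:
        return "no activation keyword matched, answering directly: " + prompt

    AGENT_ROUTES = {
        "code": code_agent,          # illustrative activation keywords
        "research": research_agent,
    }

    def route_prompt(prompt: str):
        lowered = prompt.lower()
        for keyword, agent in AGENT_ROUTES.items():
            if keyword in lowered:   # first matching activation keyword wins
                return agent
        return fallback_agent

    # usage: route the prompt, then run the selected agent on it
    print(route_prompt("please research the PAR framework")("please research the PAR framework"))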

This is just the beginning of the most important agentic application we can build and use: Your Personal AI assistant.

Stay focused, and keep building.


📝 Personal AI Assistant Gist (Draft/Read Only)
https://gist.github.com/disler/2840f0...

🤖 AgentOS - Build reusable, composable agents
   • Agent OS: LLM OS Micro Architecture f...  

🔍 7 Prompt Chains For Better AI Agents
   • 7 Prompt Chains for Decision Making, ...  

📖 Chapters
00:00 The Agentic Pieces are LINING up
00:33 Your Personal AI Assistant
00:55 ADA Demo, Proof of Concept
03:05 Big Ideas, PAR Framework, LLM Router, Flaws
03:50 Prompt, Agent, Response
06:25 AI Agent LLM Router
11:50 Future of Personal AI Assistants
13:15 Everything we do is STACKING up
14:07 Improvements, Flaws, vNext
17:05 More AI Agents, More Prompt Chains

#aiassistant #aiagents #virtualassistant
