Hello, my name is Andrew Amann and I am one of the founders of Caveminds.com. I am also the founder and CEO of NineTwoThree, a product engineering and design studio where we build web, mobile, and AI apps. Our team builds ML/AI models for enterprise companies, and through that work - and a lot of conversations - we understand which applications are going to make a bigger difference than ChatGPT.
LLMs are here, and while all of you on Twitter are watching people build tools on top of ChatGPT and bragging about quick money, be glad that you are here to learn about what the big boys and girls are doing with AI.
While we all gloat about how simple it is to log in to our favorite LLM and ask questions to our hearts' content, not everyone is excited about the ease of use.
Amazon has restricted all of its employees from using Bard, ChatGPT, or any other LLM. Why? Security. Is this a mistake? No. It's going to become the norm.
Because what many do not realize is that when the web was created, the same security problems existed. Sure, it was simpler back then - there were very few computers connected to the internet - but very quickly firewalls were built, and enterprise companies were protected.
LLMs pose the same risk. Instead of restricting which websites you can visit from inside the company firewall, Amazon is restricting which LLMs you can use. Because anything you type into a public LLM is out of your hands - always and forever.
Even law offices are restricting the use of LLMs, because if a lawyer types confidential information into one to draft an agreement, well, that is a violation of their attorney-client confidentiality pledge.
So don't go thinking lawyers will be extinct soon - the only thing that will happen is their prices will decrease as software agencies build out private LLMs for their offices. (More on this soon.)
But what about all the tools on Twitter that are "making life easier" for lawyers, accountants, tax advisors, content writers, filmmakers, and just about every job ever created? Well, those will all become case studies for agencies and internal enterprise teams on how to build secure tools for real-world use cases.
I am not saying that all tools are junk. But the Lindy effect suggests that something hacked together in a month should be expected to last about a month. Things need to be built to last, for customers who plan to use your product for years - not microseconds.
So if "new tools" will die, how will AI advance? Bard has now been released publicly to the world, and it destroys any tool that was built on top of ChatGPT promoting a "fine-tuned" dataset. I'm looking at those Jasper-style "make me something in my brand voice" tools where you upload all your website content and then hit "Give me a hero image." Double yawn.
But something is happening that no one saw coming. It is larger than ChatGPT; it is not human-created, and it is shocking data scientists. It also, in theory, should replace most tools you see being built on top of ChatGPT because, well, it's emergent.
It's called "in-context learning," and enterprise clients are drooling over its capabilities.
In-context learning is a type of machine learning where an AI model learns to perform a task by observing examples of the task being performed in context. This is in contrast to traditional machine learning, where an AI model is trained on a dataset of labeled examples of the task being performed.
Kind of like asking an intern to label all the movies on Netflix by genre.
Previously, you would need to hire a team of laborers to go into Netflix and label movie after movie after movie across alllll the genres that exist. What even is neo-noir anyway? Then, after everything is properly labeled, a data scientist can finally come in and create predictions on the dataset, so that after you're done watching "10 Things I Hate About You" you can roll right into "Never Been Kissed." Yes, 90s movies are the best.
With in-context learning, the model instead learns by observing a handful of examples of how humans label those movies, based on the context of the movie itself. Maybe the machine notices that if the word "love" appears more than four times, it's a romance. But it is not doing this in post-production. The machine infers the pattern from only the few labels the human provided, then predicts the rest of the labels from other signals it discovers about the movie itself - sometimes even things humans don't notice.
Maybe there are millions of movies, and after 20 labels the machine can start self-labeling the genres. That is a ridiculous time saving when preparing information for prediction models.
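To make that concrete, here is a minimal sketch of the idea in Python using the OpenAI client - the handful of labeled movies in the prompt is the only "training data" the model ever sees, and it infers the labeling rule on its own. The model choice, titles, and genres are assumptions for illustration, not anything from a real Netflix pipeline.

```python
# A minimal in-context learning sketch: a few labeled movies go into the prompt,
# and the model infers the labeling rule for new titles - no fine-tuning run.
# Assumes the openai package (pre-1.0 style; newer versions expose the same call
# as client.chat.completions.create) and an API key of your own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The only "training data" the model ever sees - a handful of human labels.
LABELED_MOVIES = [
    ("10 Things I Hate About You", "Romance"),
    ("Never Been Kissed", "Romance"),
    ("The Matrix", "Sci-Fi"),
    ("Se7en", "Neo-Noir"),
]

def label_genre(title: str) -> str:
    """Label one movie using only the in-context examples above."""
    examples = "\n".join(f"Movie: {t}\nGenre: {g}" for t, g in LABELED_MOVIES)
    prompt = f"{examples}\nMovie: {title}\nGenre:"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(label_genre("Blade Runner"))  # the model infers the pattern from four examples
```

No dataset of millions of labels, no retraining run - the "learning" lives entirely in the prompt.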
This. Is. Mind-blowing.
If you run a small business, a SaaS product, or even a brick-and-mortar store, there are still ways you should be educating yourself to prepare your biz. Because if you don't, your competitors will…
Improve customer service:
In-context learning can be used to train AI models to answer customer questions in a more accurate and efficient way. This can free up human customer service representatives to focus on more complex tasks.
Personalize marketing campaigns:
In-context learning can be used to track customer behavior and preferences to personalize marketing campaigns. This can help businesses to target their marketing efforts more effectively and improve their return on investment (ROI).
Automate tasks:
In-context learning can be used to automate tasks that are currently performed by humans. This can free up human employees to focus on more strategic and value-added activities.
Make better decisions:
In-context learning can be used to analyze data to identify trends and patterns. This information can be used to make better decisions about business operations.
Here are specific examples of how a biz can use it…
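To pick one of those apart - customer service - here is a hypothetical sketch of routing support messages with the same few-shot trick as above. The teams, messages, and model choice are all made up for illustration; swap in your own.

```python
# Hypothetical example: routing incoming support messages to the right team
# by showing the model a handful of already-routed messages in the prompt.
# Teams and messages are invented; assumes the same openai client as above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

ROUTED_EXAMPLES = [
    ("Where is my order? It was supposed to arrive Tuesday.", "shipping"),
    ("The app crashes every time I open settings.", "technical"),
    ("Can I get a refund for my last invoice?", "billing"),
]

def route_ticket(message: str) -> str:
    """Pick a team for a new message based only on the examples above."""
    examples = "\n".join(f"Message: {m}\nTeam: {t}" for m, t in ROUTED_EXAMPLES)
    prompt = f"Route each customer message to a team.\n{examples}\nMessage: {message}\nTeam:"
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(route_ticket("I was charged twice this month."))  # -> billing, ideally
```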
Here's how you might set up an on-premise LLM with in-context learning:
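The full build-out is more than a newsletter can hold, but at its core it might look something like this - a minimal sketch assuming an open-source instruct model running on your own hardware via the Hugging Face transformers library (the model name and the contract-clause prompt are illustrative), so nothing ever leaves your firewall:

```python
# Minimal on-premise sketch: an open-source model running on your own hardware,
# fed in-context examples drawn from internal data - nothing leaves the firewall.
# Assumes `pip install transformers torch` and enough memory for the chosen model.
from transformers import pipeline

# Illustrative model choice - swap in whatever open model your team has vetted.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

# A few internally labeled contract clauses act as the "training" context.
prompt = (
    "Classify each contract clause.\n"
    "Clause: Either party may terminate with 30 days notice.\nLabel: Termination\n"
    "Clause: Payment is due within 45 days of invoice.\nLabel: Payment Terms\n"
    "Clause: Neither party may disclose confidential information.\nLabel:"
)

result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])  # output includes the prompt plus the new label
```

Wrap that in your own access controls, logging, and internal data pipelines and you have the skeleton of a private LLM.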
LLMs are going to go private. Secure models will be built inside network firewalls and will provide executive-assistant-level help to all employees inside a company.
Workflows will be created, classifications will exist, and prompting will be rampant. But all inside a company firewall. And all built internally for that specific company.
You see, it's important - ahem, vital - that none of this information leaks. How a company "prompts" will be IP. How a company obtains data for its LLM will be IP. How a company filters data for its clients and customers will be IP.
So lawyers will definitely be needed to protect this new paradigm. Told you we would come back to talking about lawyers.
But what do you need to know about your multi-thousandaire company (or millionaire for those lucky few)?
You need to know that big products are being built to "suck up" all the ChatGPT tools that you see in the wild today. These big products are coming from companies such as Facebook, HuggingFace, Microsoft, and Google, and they will be deployed inside their ecosystems.
For example, all of your "MidJourney" apps will be replaced by Adobe as it trains its own model to produce Photoshop-style image editing. Their model will self-learn from all of its users what types of edits the images require and apply those for all the future customers asking for the same edits.
Google will deploy search capabilities into Google Docs so that you can quickly create workflows for HR, Legal, Engineering, etc., and categorize documents accordingly. Since no enterprise uses Google Docs, Dropbox will start deploying its own version of an LLM to its enterprise customers that will ensure security and scale. This will allow people to work INSIDE a safe environment while they prompt and pitter-patter on their keyboards.
As for Amazon? Well, they already have many, many, many internal LLMs doing specific tasks. They have for years - it's nothing new for them to figure out how to keep data secure. What is new is that their LLMs can get smarter through in-context learning, based on the new discoveries in AI.
So keep playing with your SelfGPT apps and photoGPT tools. I am not sure about the future of Copy.AI - it's hard to see a use case for it now that Bard exists. But then again, people still use Whereby even though Zoom, Google, Teams, and whatever Meta calls their video conferencing tool all exist.
Surely some will last. But most will die as enterprises get warmed up with their newfound ability to perform in-context learning on their large datasets.
As for agencies? Start reading Cohere blogs and learn how Pinecone works - because that is the future of development, imo.
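If you want a head start on that stack, the core pattern is: embed your documents, store the vectors in Pinecone, and retrieve the most relevant ones to hand to an LLM as context. Here is a rough sketch assuming the cohere and pinecone-client Python packages - the keys, index name, and documents are placeholders, and the exact calls shift between client versions:

```python
# Sketch of the embed-and-retrieve pattern: Cohere turns text into vectors,
# Pinecone stores and searches them, and the best match becomes LLM context.
# Keys, index name, and documents are placeholders; calls vary by client version.
import cohere
import pinecone

co = cohere.Client("YOUR_COHERE_API_KEY")
pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="us-west1-gcp")

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping policy: orders over $50 ship free in the continental US.",
]

# 1. Embed the documents.
vectors = co.embed(texts=docs, model="embed-english-v2.0").embeddings

# 2. Store them in a Pinecone index (dimension must match the embedding model).
if "company-docs" not in pinecone.list_indexes():
    pinecone.create_index("company-docs", dimension=len(vectors[0]))
index = pinecone.Index("company-docs")
index.upsert(vectors=[(f"doc-{i}", vec, {"text": doc})
                      for i, (vec, doc) in enumerate(zip(vectors, docs))])

# 3. Retrieve the most relevant document for a question and hand it to your LLM.
question = "Can I return something I bought last week?"
query_vec = co.embed(texts=[question], model="embed-english-v2.0").embeddings[0]
result = index.query(vector=query_vec, top_k=1, include_metadata=True)
print(result["matches"][0]["metadata"]["text"])
```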
Subscribe to the Caveminds AI newsletter here.