Working with AI technologies like large language models (LLMs) brings its own set of challenges.
Chief among them is deciding whether to build or buy.
Popular LLMs like OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude are an incredible way to get started quickly with your AI solution. But the vendor lock-in they create poses a long-term risk that many companies don’t consider.
On the other hand, developing your own model in-house can be tempting, especially if you have the hardware. But that creates a slew of potential headaches down the road: headaches that are hard to buy your way out of.
Let’s go over what you should ask yourself when deciding between building your own LLM solution and working with off-the-shelf vendors like OpenAI.
Let’s take a step back.
We see so many companies that think they need a fancy (expensive) AI solution.
Think hard before you commit, because it can be an expensive mistake.
Before you go all-in on a grand plan, find the smallest, cheapest possible way to validate that AI and LLMs are actually the right solution for you.
After that, you can start to consider these questions.
What’s the current state of the team, and what can you afford to hire for?
This journey might take more than a year. Are you prepared to build out a full team of AI researchers to support a self-hosted model?
There isn’t a wrong answer, but maintenance is an underrated part of this journey. If something breaks, if models drift, if you want to further customize your solution…you’ll need experts. And thanks to the AI boom, experts are more expensive than ever.
This is typically worth it at large scale, when outsize returns can justify the investment. At a small scale, it’s rarely feasible: developing a self-hosted LLM takes years of expensive R&D.
Time-to-value is an important metric for these projects, and self-hosting usually takes longer than third-party solutions.
The reason is simple: hosting, fine-tuning, developing the base model…these are all things the third-party LLM provider has already done, and done well. You’ll have to do at least a few of those steps yourself.
Plus, how long can your users wait for something perfect? If they’re okay with a simple disclaimer that an AI tool makes mistakes, you can release something quicker. But if accuracy is paramount for your use case, take that into consideration.
That adds development time, which in turn adds cost.
If you decide to build self-hosted, remember that you’re making a big bet: that a small part of your company will beat the performance of the smartest AI researchers on the planet. Researchers at billion-dollar companies who spend their whole day trying to make their models better. OpenAI gets to spend months optimizing just to make a model cheaper.
Now, to be fair, they’re making a general intelligence model better. They aren’t spending all day trying to make the model better for your specific use-case.
But if your use case is code generation, for example, it’s going to be incredibly difficult to match up against Claude or GPT. Those companies hire thousands of developers to improve their models - that’s what billions of dollars of R&D gets you.
“You really need to decide if a third-party LLM will enable your product to improve, or become a competitor. If you need an LLM for code review and generation, it is almost impossible to do better than GPT. And a solution that’s cheaper than GPT 4 now might be more expensive than GPT 5,” says Vitalijus Cernej, an ML Engineer at NineTwoThree.
So, if you’re in an industry that just needs general reasoning…do you really need a self-hosted expensive LLM? Or can you hitch your wagon to an LLM that will only ever get better, without your assistance?
If your data is extremely proprietary and specific, then you might be better suited building your own solution.
Cost is the elephant in the room for AI products. Consumer products especially can get expensive, fast.
Understanding the economics of your solution will help justify your decision. For example, if you charge $10 a month for a consumer product, and want to add an AI feature with $25 of OpenAI calls per user, per month…the math doesn’t work out.
Spending $10k self-hosting sounds bad, but compared to $100k of API calls, it starts to look a lot better.
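To make that concrete, here’s a back-of-the-envelope sketch in Python. Every number is a hypothetical placeholder - the point is the shape of the comparison, not the figures.

```python
# Back-of-the-envelope unit economics for an AI feature.
# All numbers are hypothetical placeholders; plug in your own.

MONTHLY_PRICE_PER_USER = 10.00      # what a user pays you
API_COST_PER_USER = 25.00           # third-party LLM API spend per user/month
SELF_HOST_FIXED_COST = 10_000.00    # GPUs, ops, and maintenance per month
USERS = 4_000

api_total = API_COST_PER_USER * USERS
self_host_total = SELF_HOST_FIXED_COST  # roughly flat until you outgrow the hardware

print(f"Revenue:         ${MONTHLY_PRICE_PER_USER * USERS:,.0f}/mo")
print(f"API route:       ${api_total:,.0f}/mo")
print(f"Self-host route: ${self_host_total:,.0f}/mo")

# Break-even: the user count at which flat self-hosting costs
# less than per-user API calls.
break_even_users = SELF_HOST_FIXED_COST / API_COST_PER_USER
print(f"Self-hosting wins above ~{break_even_users:,.0f} users")
```

With these placeholder numbers, self-hosting wins above 400 users - but the sensitivity to your real usage is exactly why you should run the math yourself.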
That brings us to accuracy.
Let’s be honest - as of September 2024 (when this article was written), there isn’t an open-source model that touches GPT in general accuracy.
But sheer model accuracy isn’t the only way to guarantee a performant system.
There are many systems design and architecture optimizations you can make to improve accuracy - think retrieval-augmented generation (RAG), careful prompt design, or fine-tuning on your own data.
These techniques can improve the performance of an open-source model considerably.
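As one illustration, here’s a minimal sketch of the RAG pattern. The documents and the keyword-overlap retriever are toy stand-ins; in practice you’d use a vector store and feed the prompt to whatever open-source model you serve.

```python
# A minimal sketch of retrieval-augmented generation (RAG), one of the
# architecture-level techniques mentioned above. Everything here is a
# toy stand-in for a real corpus, retriever, and model.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
    "Our API rate limit is 100 requests per minute.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved context instead of relying on
    raw model accuracy alone."""
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How fast are refunds processed?"))
```

The design point: the retrieval step supplies the facts, so the model only has to read and summarize - a much easier job than answering from memory.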
So, if you can tolerate iterating on accuracy over time, you don’t have to go with the latest and greatest from OpenAI.
“Typically, we see customers start building with a third-party solution. When they see the right metrics - customers, engagement, revenue - then they start to consider self-hosting. Being able to compare the cost is critical in making the right decision,” says Vitalijus.
Not all architectures are created equal.
Plugging into OpenAI sounds nice, but if you run an IoT device company with computing at the edge, you can’t always rely on a stable connection. Sending information from an edge device to OpenAI and back takes time.
That’s why many IoT AI solutions run a small model on-device. It might not be as powerful as a few $50k GPUs, but it’s much more reliable.
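Here’s a minimal sketch of that pattern using llama-cpp-python, a common way to run small quantized models on modest CPU-only hardware. The model file and settings are placeholders - pick whatever fits your device.

```python
# A minimal sketch of on-device inference for an edge deployment,
# using llama-cpp-python with a small quantized model. The model path
# and settings are hypothetical; any GGUF model small enough for your
# device works.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model.q4_k_m.gguf",  # hypothetical local file
    n_ctx=2048,     # keep the context window small to fit device memory
    n_threads=4,    # match the device's CPU cores
)

# No network round trip: the request never leaves the device.
result = llm("Summarize the sensor log: temp spiked at 14:02.", max_tokens=64)
print(result["choices"][0]["text"])
```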
If you’re in a security-focused domain dealing with sensitive data…you might not have a choice.
Sending data to an external company, no matter how secure it is, might not be an option. And some vendors might not work with you if you can’t guarantee their data is staying behind your firewall.
The good news? It’s getting much easier to run these models with no internet access.
“We had a case not long ago where a client asked if we could run the model locally on a plane. That’s an edge case - with no internet, and a very old machine. So at that point, the answer was no. Now, thanks to advances, I think we could actually do it,” says Jurgis Samatis, ML Ops Engineer at NineTwoThree.
It’s entirely possible to host this behind a corporate firewall - we go over that here.
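As a sketch of what “no internet access” looks like in practice, here’s one way to load a Hugging Face model from pre-provisioned local files while hard-failing on any network call. The directory path is hypothetical; the model weights would be copied inside the firewall ahead of time.

```python
# A minimal sketch of air-gapped model loading for deployments behind
# a corporate firewall. Assumes the model files were provisioned to
# local disk ahead of time; the path is a placeholder.
import os

os.environ["HF_HUB_OFFLINE"] = "1"  # hard-fail on any hub network call

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/llm"  # hypothetical pre-provisioned directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

inputs = tokenizer("The firewall stays closed, and", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```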
Long-term stability should be a factor if you’re trying to be in this for the long haul (as you should be).
Models do retire, change, and are ultimately out of your control. Companies like HashiCorp moved Terraform from open source to a more restrictive license because they’re under pressure to deliver shareholder value. Vendors like Amazon can and will deprecate solutions.
They might give you a few years' notice, but it’s always something to look out for, and creates a deadline as soon as it's announced.
“Big deal. I’ll just switch over to the new model!”
Not so fast. While this is possible, it’s not as straightforward as an API change.
“We’ve seen cases where switching a model but keeping the prompt can break performance. Even switching model versions - GPT 3.5 to 4, for example - can have a big impact. If you can’t control that, and you build a system that’s sensitive to those changes, you’ll get rug pulled one day,” says Jurgis.
LLM output is nondeterministic, and that’s why we spend months fine-tuning performance and prompts. A new model might mean having to do that all over again…and this time, it might be while the app is in production, with (angry) paying customers.
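One way to protect yourself is a golden-set regression suite that runs before any model switch. This sketch uses the OpenAI Python SDK with a pinned model snapshot; the test cases and pass criteria are illustrative stand-ins for your own.

```python
# A minimal sketch of a prompt regression suite: run a golden set of
# prompts against a pinned model version before switching, so an
# upgrade can't silently break behavior. The cases are hypothetical.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
PINNED_MODEL = "gpt-4-0613"  # pin an exact snapshot, never a moving alias

GOLDEN_CASES = [
    # (prompt, a substring the answer must contain) -- illustrative cases
    ("Extract the currency from: 'Total: 49.99 EUR'", "EUR"),
    ("Answer yes or no: is 7 a prime number?", "yes"),
]

def run_suite(model: str) -> int:
    """Return the number of golden cases the model fails."""
    failures = 0
    for prompt, expected in GOLDEN_CASES:
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduces (but doesn't eliminate) nondeterminism
        ).choices[0].message.content
        if expected.lower() not in reply.lower():
            failures += 1
            print(f"FAIL [{model}] {prompt!r} -> {reply!r}")
    return failures

# Run against the candidate model before flipping production traffic.
assert run_suite(PINNED_MODEL) == 0
```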
If you have a niche company, customer base, and prompt, you could be in trouble: a new model might score better on general benchmarks yet perform worse for your niche.
In case it isn’t abundantly clear, it’s very difficult to create a self-hosted solution at enterprise scale.
Especially because you’re probably operating nimbly: starting with OpenAI or an equivalent model to prove out a concept and ship a quick MVP. Once you have customers and things are scaling, you can consider taking that momentum and applying it to a self-hosted solution.
However, if you’re dealing with sensitive data, or edge-case solutions…self-hosting might be the way to go.
Ultimately, you need to make this decision with experts.