“We need to have an AI strategy” is something you’ve either heard from your boss, or told your direct reports. And it’s true - data science and AI are the enterprise buzzwords of the last half-decade.
It’s only gotten more critical over the past few years, as technologies like LLMs made it much, much easier to go from idea to product.
There isn’t just a new wave of billion-dollar companies that didn’t exist 2 years ago…there are thousands of AI tools that enterprises are spinning up for their employees. Tools that save them time every day, and pay off their investment within months.
So…why isn’t everyone succeeding? If it’s so easy, where are all the huge wins?
Well, like most enterprise projects, AI projects die on the vine - in ways we’ve seen first-hand and helped companies work through.
We think we’re uniquely qualified to talk about this, because all of our projects reach production. And we cap downside by testing a small version early, not 6 months and $500,000 into a project.
We don’t want you to make the same mistakes others have made. Let us explain what you can do to set your AI project up for success.
If I had to boil this down to one common trait, one single reason that AI projects fail, it’s pretty simple:
No one treats them like actual engineering projects.
Think about it. With engineering projects (the successful ones, at least) you follow some version of the same path: scope the problem, define what success looks like, build something small, test it, then scale.
When you spell it out, it makes total sense.
Define success metrics early: the most common pitfall is starting strong, then falling apart at testing time because nobody agreed on what "done" looks like.
We’ve lost count of the companies that start out strong, then completely fail when it’s time to test.
They do everything right - excellent data quality, specific problem, well-trained model. Then, once it’s time to do user testing, they can’t agree on what “done” looks like.
Agree on success metrics, or at least on what you’re hoping to measure. Some common examples are sketched below.
Do this early, to avoid pain later.
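To make that concrete, here’s a minimal sketch of what an agreed metrics definition can look like - the metric names, targets, and measurement notes are illustrative placeholders, not recommendations:

```
# Hypothetical "definition of done" for an AI pilot.
# Metric names and thresholds are placeholders - agree on your own
# with product, engineering, and legal before you start building.
SUCCESS_METRICS = {
    # metric: (target, direction)
    "answer_accuracy":       (0.85, "min"),   # on a shared labeled test set
    "latency_p95_seconds":   (3.0,  "max"),   # on a staging load test
    "cost_per_request_usd":  (0.02, "max"),   # 30-day pilot average
    "user_satisfaction_1_5": (4.0,  "min"),   # post-interaction survey
}

def meets_targets(results: dict) -> bool:
    """True only when every agreed metric hits its target."""
    for name, (target, direction) in SUCCESS_METRICS.items():
        value = results[name]
        if direction == "min" and value < target:
            return False
        if direction == "max" and value > target:
            return False
    return True
```

The point isn’t the code - it’s that the targets live in one place everyone signed off on, before the first model is trained.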
GPT-4o mini instead of o1.
10 API calls instead of 10,000.
One small workflow instead of an entire production app.
Are you getting it?
We want to validate at $10,000 before we deploy a system that costs $1,000,000.
Think of a small workflow, with limited quality data, that you can test with a cheap LLM.
If it shows promise, then we can talk about scaling up. But only then.
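Here’s a minimal sketch of what that small test can look like, assuming a pilot on OpenAI’s gpt-4o-mini via their Python SDK - the prompt, example tickets, and task are placeholders for whatever your own workflow is:

```
# A small pilot: a cheap model, a handful of real examples, manual review.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "Summarize the customer ticket in one sentence and tag its category."

# Ten real tickets pulled from your queue - not ten thousand.
pilot_tickets = [
    "My March invoice was charged twice, please refund one of them.",
    "The mobile app crashes every time I open the reports tab.",
]

for ticket in pilot_tickets:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # cheap model first, not the flagship
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": ticket},
        ],
    )
    print(ticket)
    print("->", response.choices[0].message.content)
```

A run like this costs cents, which is exactly the point.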
It’s not just about training a really great LLM. If you come from the software world, you might think that things stay more-or-less the same post-deployment.
But in AI, the data your model sees in production keeps shifting, so you need to monitor for drift and retrain or update the model regularly to keep performance from degrading.
Neglect this, and even a successful launch can turn into a bad product within months.
Make sure to account for this when you’re planning - it’s non-trivial and non-negotiable.
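What that looks like in practice can start very simply - a recurring check of fresh, labeled production samples against the launch baseline. A sketch, with placeholder numbers:

```
# Hypothetical drift check: re-score the model on fresh labeled samples
# each week and alert when quality slips past an agreed tolerance.
BASELINE_ACCURACY = 0.87   # measured at launch on the agreed test set
DRIFT_TOLERANCE = 0.05     # how much slippage triggers an alert

def check_for_drift(weekly_accuracy: float) -> None:
    drop = BASELINE_ACCURACY - weekly_accuracy
    if drop > DRIFT_TOLERANCE:
        # In practice: page the team, open a retraining ticket, consider rollback.
        print(f"ALERT: accuracy is {drop:.1%} below baseline - investigate.")
    else:
        print(f"OK: accuracy within {DRIFT_TOLERANCE:.0%} of baseline.")

check_for_drift(weekly_accuracy=0.79)  # example week that would trigger the alert
```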
Building on the latest and greatest from OpenAI sounds great until it’s time to pay the bill.
And yet, no one wants to shell out $50mm for fancy data centers without knowing it’s the right move. As always, there’s a middle ground.
Decide which third-party vendors you want to rely on, and which services you can run in-house or build on top of your existing infrastructure.
Just please, don’t jump to the most expensive model. As we talked about above, it’s not necessary at the start. And for some use cases, it’s never necessary at all!
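A back-of-the-envelope cost model is usually enough to compare options before you commit. The prices below are made-up placeholders - plug in your vendors’ current rates:

```
# Rough monthly cost for an LLM-backed workflow. Prices are illustrative
# placeholders, not any vendor's actual rates.
def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 price_per_million_tokens: float) -> float:
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1_000_000 * price_per_million_tokens

small_model = monthly_cost(5_000, 2_000, price_per_million_tokens=0.60)
flagship = monthly_cost(5_000, 2_000, price_per_million_tokens=15.00)
print(f"Small model: ~${small_model:,.0f}/month vs flagship: ~${flagship:,.0f}/month")
```

Run the numbers both ways before anyone signs a contract.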
I know we said that these projects differ from your typical engineering product, but that doesn’t mean you shouldn’t borrow best practices.
Delegation of duties will serve you well when investing in these projects, and scaling them up.
Data scientists shouldn’t have to design complicated UIs and testing frameworks. Sure, if they want to talk to stakeholders to get a better understanding of the features, it makes sense. But it should be a nice-to-have, not a requirement.
Product managers shouldn’t be fine-tuning models.
ML Engineers shouldn’t be digging around in messy datasets.
Divide your duties carefully.
Measure your progress, keep track of it somewhere, and make sure you set realistic goals.
This is where you can lean on the talent you’re working with - they’ll help you understand what’s a great goal for a 2-week sprint, and what’s a 3-month epic.
Just make sure you’re tracking everything - especially costs.
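The lowest-effort version - again a sketch with hypothetical names - is to log token usage and estimated cost on every model call from day one, so the numbers exist when someone asks for them:

```
# Hypothetical usage log: append tokens and estimated cost for every call.
import csv
import datetime

PRICE_PER_MILLION_TOKENS = 0.60  # placeholder - use your vendor's pricing

def log_usage(task: str, prompt_tokens: int, completion_tokens: int,
              path: str = "llm_usage_log.csv") -> None:
    total_tokens = prompt_tokens + completion_tokens
    cost = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(), task,
            prompt_tokens, completion_tokens, f"{cost:.6f}",
        ])

log_usage("ticket-summarizer", prompt_tokens=820, completion_tokens=140)
```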
Too many teams overlook evaluation and user testing, and it costs them dearly.
It’s critical because this is when you usually find out that everyone has a wildly different definition of what "correct", "good enough", and "shippable" really mean.
The legal team might expect 100% accuracy with zero hallucinations, whereas 70% might be perfectly acceptable by industry standards.
Agree on these metrics.
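Once the thresholds are written down, the sign-off conversation can literally be a script everyone trusts. A sketch, assuming a shared labeled test set and whatever predict() function your system exposes:

```
# Hypothetical sign-off check: score the system on the shared labeled test
# set and compare against the threshold all stakeholders agreed to.
AGREED_ACCURACY_THRESHOLD = 0.85  # placeholder - whatever legal, product,
                                  # and engineering actually signed off on

def evaluate(predict, labeled_examples) -> float:
    """labeled_examples: list of (input_text, expected_label) pairs."""
    correct = sum(1 for text, expected in labeled_examples
                  if predict(text) == expected)
    return correct / len(labeled_examples)

def ready_to_ship(predict, labeled_examples) -> bool:
    accuracy = evaluate(predict, labeled_examples)
    print(f"Measured accuracy: {accuracy:.1%} (threshold {AGREED_ACCURACY_THRESHOLD:.0%})")
    return accuracy >= AGREED_ACCURACY_THRESHOLD
```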
A few examples of where this approach has paid off for us: NineTwoThree used the on-device ML capabilities of the iPhone to transform SWEE’s golf training experience.
NineTwoThree created and scaled an entire AI division for DataFlik, and helped them become a huge success story in the Real Estate AI space.
NineTwoThree was selected by the CR Innovation Lab to help build an experimental chatbot that combines the power of AI with CR's expertise to answer your questions and offer product recommendations.
NineTwoThree helped design and implement the system alongside CR’s engineering and product team.
NineTwoThree worked with renowned home security company SimpliSafe. We used AI vision to stop burglars before they strike.
NineTwoThree worked with Protect Line to introduce a revolutionary AI chatbot that enhances customer experience (and converts more sales).
As you venture into this space, there are a few critical missteps to watch out for. Here’s what to keep in mind to avoid common pitfalls:
I know social media makes it seem like there’s nothing standing in your way. There’s less than you’d expect - but keep your expectations realistic. This is hard stuff, and you’re working at the bleeding edge.
Be prepared for a journey that will pay dividends. But a journey nonetheless.
“Let’s add an AI chatbot” is one thing, but deciding on what that actually means is another.
Did you see a flashy demo and get inspired? Clearly link it to your product, its goals, and how it makes them more achievable - not just to impressing customers or shareholders.
Soon, asking someone what their “AI strategy” is will sound like asking Apple what their “technology strategy” is. It’ll be a foregone conclusion that every company has one.
Don’t be left behind!
We’ve put a lot of thought into AI. We’ve got more than a decade of experience working with some of the biggest corporations on the planet, and we’d love to help you on your AI journey.