Investing in Machine Learning with Confidence: Steps to Secure ROI

AI works, but only if you approach it the right way. Before starting your AI project, let's take a moment to learn from others' mistakes. From defining success metrics to building with the right tools, we've got the insider tips to help you avoid common pitfalls and set your AI strategy up for real success.
Download it now.

Download "Investing in Machine Learning with Confidence: Steps to Secure ROI" to Learn How to

  • Avoid common pitfalls in AI projects
  • Define success, test small, and plan ahead
  • Make smart build vs. buy decisions
  • Apply engineering principles: agile, testing, clear roles
  • Learn from success stories

AI Strategies: Why Some Soar and Others Sink

“We need to have an AI strategy” is something you’ve either heard from your boss, or told your direct reports. And it’s true - data science and AI are the enterprise buzzwords of the last half-decade. 

It’s only gotten more critical over the past few years, as technologies like LLMs made it much, much easier to go from idea to product. 

There isn’t just a new wave of billion-dollar companies that didn’t exist 2 years ago…there are thousands of AI tools that enterprises are spinning up for their employees. Tools that save them time every day, and pay off their investment within months.

So…why isn’t everyone succeeding? If it’s so easy, where are all the huge wins?

Well, like so many enterprise projects, most AI projects die on the vine.

  • Maybe no one calculated how much the OpenAI bill would actually be. 
  • Maybe it turned out the data quality was nowhere near good enough to support a production product.
  • Maybe no one defined success metrics, so it was impossible to tell if the model was effective.
  • And finally, worst of all…maybe the right experts weren’t there to take it across the finish line. 

All realistic scenarios we’ve seen and helped with. 

We think we’re uniquely qualified to talk about this, because all of our projects reach production. And we cap downside by testing a small version early, not 6 months and $500,000 into a project.

We don’t want you to make the same mistakes others have made. Let us explain what you can do to set your AI project up for success.

Why Most AI Projects Fail

If I had to boil this down to one common trait, one single reason that AI projects fail, it’s pretty simple:

No one treats them like actual engineering projects.

Think about it. With engineering projects (the successful ones, at least) you follow some path of:

  • Low-stakes proof of concept
  • Defining success metrics and criteria
  • Allocating and estimating cloud spend and budget
  • Thoroughly evaluating technical capabilities before investing significant effort
  • Making sure the right talent is working on the project

When you spell it out, it makes total sense. 

Key Strategies for AI Success

Define Success Metrics Early

We’ve lost count of the companies that start out strong, then completely fail when it’s time to test.

They do everything right - excellent data quality, specific problem, well-trained model. Then, once it’s time to do user testing, they can’t agree on what “done” looks like.

  • Is 90% accuracy production-ready?
  • Are we okay with 70% for specific use cases? 
  • What about latency - is a customer going to tolerate an 8-second wait time?

Agree on success metrics, or at least what you’re hoping to measure. Some common examples:

  • Accuracy
  • Latency
  • Cost per interaction / chat / outcome / etc

Do this early, to avoid pain later.
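To make this concrete, here’s a minimal sketch (in Python) of what “agreeing on success metrics” can look like once it’s written down. The metric names and thresholds are made-up examples, not recommendations:

```python
# Sketch: encode the agreed success criteria as data, so "done" becomes a
# check instead of a debate. All metric names and thresholds are made-up
# examples. Use whatever your team actually agrees on.

CRITERIA = {
    "accuracy": {"min": 0.90},          # fraction of correct answers
    "latency_p95_s": {"max": 3.0},      # 95th-percentile response time, seconds
    "cost_per_chat_usd": {"max": 0.05}, # spend per conversation
}

def meets_criteria(measured):
    """Return a pass/fail verdict per metric against the agreed thresholds."""
    verdicts = {}
    for name, rule in CRITERIA.items():
        value = measured[name]
        if "min" in rule:
            verdicts[name] = value >= rule["min"]
        else:
            verdicts[name] = value <= rule["max"]
    return verdicts

verdicts = meets_criteria(
    {"accuracy": 0.92, "latency_p95_s": 4.1, "cost_per_chat_usd": 0.03}
)
# Accuracy and cost pass, latency fails, and everyone can see exactly why.
```

The point isn’t the code; it’s that once the thresholds live somewhere shared, “is it done?” stops being a matter of opinion.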

Test the Smallest Possible Version

GPT-4o mini instead of o1.

10 API calls instead of 10,000.

One small workflow instead of an entire production app.

Are you getting it?

We want to validate at $10,000 before we deploy a system that costs $1,000,000.

Think of a small workflow, with limited quality data, that you can test with a cheap LLM.

If it shows promise, then we can talk about scaling up. But only then.
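As a back-of-envelope illustration, a tiny cost model makes the gap obvious. The token counts and per-1K-token prices below are illustrative assumptions, not quoted vendor rates:

```python
# Sketch: estimate API spend before committing. Token counts and
# per-1K-token prices here are illustrative assumptions, not real rates.

def run_cost(calls, in_tokens, out_tokens, price_in_per_1k, price_out_per_1k):
    """Estimated API spend for a batch of calls."""
    per_call = (in_tokens / 1000) * price_in_per_1k \
             + (out_tokens / 1000) * price_out_per_1k
    return calls * per_call

# 10 calls on a cheap model: pocket change.
smoke_test = run_cost(10, 2_000, 500, 0.00015, 0.0006)

# 10,000 calls on a premium model: real money.
full_scale = run_cost(10_000, 2_000, 500, 0.015, 0.06)
```

Running numbers like these before the pilot is exactly how you avoid the surprise OpenAI bill from the list above.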

Think About Upkeep

It’s not just about training a really great LLM. If you come from the software world, you might think that things stay more-or-less the same post-deployment. 

But in AI, we need to update the model frequently to prevent model drift and decreased performance. 

This is so important, it can turn any successful launch into a bad product within months. 

Make sure to account for this when you’re planning - it’s non-trivial and non-negotiable.
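One simple way to account for it: keep a fixed, labeled evaluation set and re-score the model on it regularly. A minimal sketch of that kind of drift alarm, with assumed numbers:

```python
# Sketch: flag model drift by comparing recent accuracy on a fixed labeled
# sample against the accuracy measured at launch. The tolerance is whatever
# drop your team agreed is acceptable; 5 points here is just an example.

def drift_alert(baseline_acc, recent_acc, tolerance=0.05):
    """True when performance has slipped more than the agreed tolerance."""
    return (baseline_acc - recent_acc) > tolerance

drift_alert(0.91, 0.88)  # small dip, within tolerance
drift_alert(0.91, 0.82)  # 9-point drop: time to retrain or refresh data
```

Schedule this check like any other health check; the expensive part is deciding up front what drop triggers a retrain.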

Decide Where to Build and Where to Buy

Building on the latest and greatest from OpenAI sounds great until it’s time to pay the bill. 

And yet, no one wants to shell out $50mm for fancy data centers without knowing it’s the right move. As always, there’s a middle ground.

Decide which third-party vendors you want to rely on, and which services you can run in-house or build on your existing infrastructure.

Just please, don’t jump to the most expensive model. As we talked about above, it’s not necessary at the start. And for some use cases, it’s never necessary at all!

Borrowing From Winning Engineering Principles

I know we said that these projects differ from your typical engineering product, but that doesn’t mean you shouldn’t borrow best practices.

Ownership

Delegation of duties will serve you well when investing in these projects, and scaling them up.

Data scientists shouldn’t have to design complicated UIs and testing frameworks. Sure, if they want to talk to stakeholders to get a better understanding of the features, it makes sense. But it should be a nice-to-have, not a requirement.

Product managers shouldn’t be fine-tuning models.

ML Engineers shouldn’t be digging around in messy datasets.

Divide your duties carefully. 

Agile

Measure your progress, keep track of it somewhere, and make sure you set realistic goals.

This is where you can lean on the talent you’re working with - they’ll help you understand what’s a great goal for a 2-week sprint, and what’s a 3-month epic.

Just make sure you’re tracking everything - especially costs.

Robust Testing

Too many teams overlook this step, and it costs them dearly.

It’s critical you do this, because this is when you usually find out that everyone has a wildly different definition of “correct”, “good enough”, and “shippable”.

The legal team might expect 100% accuracy with zero hallucinations, whereas 70% might be considered acceptable by industry standards.

Agree on these metrics.  
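A lightweight way to force that agreement is a shared eval harness that every stakeholder’s bar is run against. A hypothetical sketch, with a stub model and made-up cases and thresholds:

```python
# Sketch: score a model on labeled cases, then compare the score against
# each stakeholder's bar. The cases, bars, and stub model are all made up.

CASES = [
    {"input": "refund policy?",   "expected": "30 days"},
    {"input": "support hours?",   "expected": "24/7"},
    {"input": "warranty length?", "expected": "1 year"},
]

def evaluate(model, cases):
    """Fraction of cases whose expected answer appears in the model output."""
    hits = sum(1 for c in cases if c["expected"] in model(c["input"]))
    return hits / len(cases)

def stub_model(prompt):
    # Stand-in for a real LLM call.
    answers = {
        "refund policy?":   "Refunds are accepted within 30 days.",
        "support hours?":   "Support is available 24/7.",
        "warranty length?": "The warranty lasts six months.",
    }
    return answers[prompt]

BARS = {"legal": 1.00, "product": 0.70}  # different teams, different bars

score = evaluate(stub_model, CASES)  # 2 of 3 cases pass
```

Here the score clears neither bar, which is exactly the conversation to have before launch, not after.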

Examples of Winning Projects

SWEE

NineTwoThree used the on-device ML capabilities of the iPhone to transform SWEE’s golf training experience.

DataFlik

NineTwoThree created and scaled an entire AI division for DataFlik, and helped them become a huge success story in the Real Estate AI space.

Consumer Reports

NineTwoThree was selected by the CR Innovation Lab to help build an experimental chatbot that combines the power of AI with CR's expertise to answer your questions and offer product recommendations.

NineTwoThree helped design and implement the system alongside CR’s engineering and product team.

SimpliSafe

NineTwoThree worked with renowned home security company SimpliSafe. We used AI vision to stop burglars before they strike.

Protect Line

NineTwoThree worked with Protect Line to introduce a revolutionary AI chatbot to enhance customer experience (and convert more sales).

Avoid These Common Mistakes

As you venture into this space, there are a few critical missteps to watch out for. Here’s what to keep in mind to avoid common pitfalls:

1. Expecting All This to Be Easy

I know social media makes it seem like there’s nothing standing in your way. And while there’s less in your way than you’d expect, keep your expectations realistic. This is hard stuff, and you’re working at the bleeding edge.

Be prepared for a journey that will pay dividends. But a journey nonetheless.

2. Picking Tech with No Clear Criteria

“Let’s add an AI chatbot” is one thing, but deciding on what that actually means is another.

Did you see a flashy demo and get inspired? That’s fine, but link the idea clearly to your product, its goals, and how the technology makes them more achievable. Don’t do it just to impress customers or shareholders.

3. Sitting Out AI Completely 

Soon, asking someone what their “AI strategy” is will sound like asking Apple what their “technology strategy” is. It’ll be a foregone conclusion that every company has one. 

Don’t be left behind!

Achieve AI Success

We’ve put a lot of thought into AI. We’ve got more than a decade of experience working with some of the biggest corporations on the planet, and we’d love to help you on your AI journey.

Reach out if you’re interested.

If you like this, download the full resource here.