
With the rise of open source AI models, the commoditization of this revolutionary technology is upon us. It’s easy to fall into the trap of pointing a newly released model at a desirable tech demographic and hoping it catches on.
Building a moat when so many models are readily available is a dilemma for early-stage AI startups, but leveraging deep relationships with customers in your field is a simple yet effective tactic.
The real gap is a combination of AI models trained on proprietary data, along with a deep understanding of how an expert goes about their day-to-day tasks to solve nuanced workflow problems.
In highly regulated industries where results have real-world implications, data storage must clear the high bar of compliance checks. Customers typically prefer companies with a track record over startups, which fosters an industry of fragmented data sets where no single player has access to all the data. Today, players of all sizes hold their datasets behind highly compliant closed servers.
This creates an opportunity for startups with existing relationships to approach potential customers who would typically outsource their technology and run a pilot of their software to solve customer-specific issues. These relationships could arise through co-founders, investors, advisors, or even previous professional networks.
Showing customers tangential credentials is an effective way to build trust. Positive indicators include team members from a university known for its AI experts, a strong demonstration in which the prototype helps potential customers visualize the results, or a clear business case for how your solution will save or earn them money.
A common mistake founders make at this stage is assuming that building models on customer data is sufficient for product-market fit and differentiation. In reality, finding PMF is far more complex: simply throwing AI at a problem creates issues with accuracy and customer acceptance.
Clearing the high bar set by experienced domain experts in highly regulated industries, who have in-depth knowledge of day-to-day changes, usually proves a tall order. Even AI models that are well trained on data can lack the precision and nuance of expert domain knowledge or, more importantly, any connection to current reality.
A risk detection system trained on a decade of data may have no idea that conversations among industry experts or recent news could render a widget once considered “risky” completely harmless. Similarly, a coding assistant might suggest completions for an earlier version of a front-end framework that has since shipped a succession of high-frequency breaking releases.
In these situations, it’s best for startups to rely on the launch-and-iterate model, even with pilots.
There are three key tactics for managing pilots: