The latest AI startups and venture capital firms are leaning into Minimum Viable Quality (MVQ) as a key criterion and strategy.
In today’s column, I continue my ongoing and extensive analysis of what’s trending for the latest in AI startups, especially for founders seeking venture capital (VC) funding; see my series at the link here. The focus of this discussion is the advent of minimum viable quality (MVQ) as a crucial keystone and essential AI go-to-market strategy that can determine the worthiness and success of any new AI startup. I’ve been talking about MVQ for quite a while, and I’m excited to see that this all-important approach is gaining traction.
Allow me to emphasize the catchy phrase again, this time with added emphasis: Minimum Viable Quality (MVQ).
If you have not yet heard or seen the emerging moniker, you will soon.
To get you up to speed, I will go ahead and unpack the MVQ as it underpins AI startups and give you numerous real-world tips and insights for adopting the MVQ philosophy and its down-to-earth practicalities.
In my avid activities as a consultant/mentor to startups, plus formerly serving as a top exec at a major VC firm, my aim here is to give you insider insights. I hope that doing so will bring your budding startup to grand fruition. That’s the goal.
Keep your spirits up and pursue those dreams.
Background About MVQ
Some worthy background will be helpful to suitably set the stage.
The coined phrase MVQ has at times been used in other contexts that weren’t necessarily AI-related. In that sense, MVQ is getting recast into the AI domain. Anyone versed in the sphere of quality control (QC) and the vaunted heydays, for example, of the Malcolm Baldrige National Quality Award might faintly recall the moniker.
MVQ is getting retooled and reapplied for very sensible reasons, particularly AI-related reasons. Hang in there, this will be explained momentarily.
The MVQ moniker ought to ring bells since it is phrased similarly to the stellar Minimum Viable Product (MVP). Everyone knows about MVP. A startup is urged to design and build its loosely conceived product as an initial prototype, demonstrable to at least the minimum degree needed to viably showcase what it is and what it can accomplish.
This makes life easier. Prospective investors can see a tangible artifact. Prospective customers can kick the tires. You shift from a purely conceptual pitch to one where the rubber meets the road, as it were. I dare say nearly anyone pitching an AI startup must put together a viable MVP; anything less than a bona fide MVP and you seemingly won’t get to first base.
MVQ leverages that familiar ring, though the purpose differs from that of the MVP. The crux for MVQ is that whatever product or service your startup is devising, you go ahead and define what the minimum of quality is viably necessary for that product or service to be acceptable to the marketplace.
Whoa, some say, shouldn’t quality be at 100%? There ought not to be any debate or argued discourse on that criterion. The envisioned product or service must get things right all the time, each and every time. No one can ever compromise on quality. Period, end of story.
That’s not the right kind of thinking for modern times when it comes to leaning into the outsized benefits of contemporary AI. The world of generative AI and large language models (LLMs) is not an on-off, binary kind of place.
Furthermore, I’ll let you in on a secret.
Even legacy systems and software have always had varying ranges of quality. To some degree, this has been a hidden matter. Quality tended to be an unspoken don’t-ask, don’t-tell characteristic. Efforts to infuse software quality assurance (SQA) into programs and applications have lamentably been a step beneath the brasher and bolder declaration that as long as the system works, it’s good to go. The recent CrowdStrike software incident, which some call an epic fail, serves as a striking reminder that software quality is not a solved problem.
Anyway, let’s shift into AI mode to see why MVQ is vital for AI-based systems and startups devising AI at the core of their products or services.
Generative AI And LLMs Are Non-Deterministic
I will begin by setting the record straight about what today’s AI consists of. There are myths that need to be busted.
Let me first set the record straight on one very significant facet. None of today’s AI is sentient. Sorry to break the jarring news to you. I mention this since there are lots of headlines that seem to proclaim or suggest otherwise.
AI is a mathematical and computational construct or mechanization that just so happens to often seem to act or respond in human-like ways. Be very careful when comparing AI to the nature of human capabilities, which for example I delicately cover and differentiate in my recent discussion about inductive and deductive reasoning associated with AI versus that of humans, at the link here. Another handy example is my recent coverage of AI that purportedly has “shared imagination” with other AI, see the link here.
The gist is that there is way too much anthropomorphizing of AI going on.
I want to next bring up the overall topic of generative AI and large language models (LLMs). I’m sure you’ve heard of generative AI, the darling of the tech field these days.
Perhaps you’ve used a generative AI app, such as the popular ChatGPT, GPT-4o, Gemini, Bard, Claude, etc. The crux is that generative AI can take input from your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use and has now been shifted into a new caliber of NLP fluency that is at times startling or amazing.
The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.
In brief, a computer-based model of human language is established that has a large-scale data structure and does massive-scale pattern-matching on a large volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
When composing responses, generative AI and LLMs make use of statistics and probabilities to choose which words are to appear in the generated essay or online interactions. A handy aspect is that the output you see is nearly one-of-a-kind. Rather than repeatedly stating the same words or sentences over and over, the use of a tinge of randomness gives the result a semblance of fluency and uniqueness. The result is said to be non-deterministic. This means that you cannot readily predict exactly what the output will be, unlike systems that work on a deterministic basis.
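To make that tinge of randomness concrete, here is a toy sketch in Python of temperature-scaled sampling over word scores. This is purely my illustration, not any vendor’s actual decoding code, and the word scores are invented for the example:

```python
import math
import random

def sample_next_word(word_scores, temperature=0.8, rng=None):
    """Pick the next word from model scores using temperature-scaled
    softmax sampling -- the 'tinge of randomness' described above.
    Lower temperature makes the choice more deterministic."""
    rng = rng or random.Random()
    words = list(word_scores)
    scaled = [word_scores[w] / temperature for w in words]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(words, weights=probs, k=1)[0]

# Invented "model scores" for the word following "The patient reports ..."
scores = {"pain": 2.1, "improvement": 1.7, "nausea": 1.2, "nothing": 0.4}

# Two runs over the very same input can yield different word sequences --
# that is the non-determinism in action.
run1 = [sample_next_word(scores, rng=random.Random(1)) for _ in range(5)]
run2 = [sample_next_word(scores, rng=random.Random(2)) for _ in range(5)]
```

The key design point is the temperature knob: near zero, the highest-scoring word wins almost every time (deterministic-feeling output); higher values spread the probability around, producing the freshness and variety discussed above.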
Please put a mental pin in that notable point of non-determinism; I’ll come back to it.
You might have heard about so-called AI hallucinations associated with generative AI and LLMs. In a sense, AI hallucinations add to the non-determinism or unpredictability of this type of AI. I disfavor the terminology since the word “hallucinations” has human-like connotations. AI hallucinations are more aptly coined as AI fabrications. The notion is that from time to time the AI might generate fictitious or fabricated content that has no grounding in facts.
There was a highly publicized instance of two lawyers who got themselves in hot water with a court when they submitted a legal brief containing falsehoods or fictitious cited legal cases that they obtained by using generative AI; see my coverage and discussion at the link here and the reactions by judges and the courts at the link here. All in all, issues are expressed daily about the dangers of AI hallucinations. People can readily be fooled or lulled into thinking that the AI is always right and won’t output something false. Sorry, but that’s a false belief, and no one should assume or take at face value that the generated content is correct. Double-checking is required.
The advent of AI hallucinations or fabrications reflects a current structural weakness in how generative AI is devised, and there is a tremendous amount of AI research occurring to reduce or mitigate these vexing and serious AI issues; see my in-depth analysis at the link here. Just to let you know, some fervently assert that no matter what is done, existing ways of building and fielding generative AI are going to inexorably lead to AI hallucinations (it is claimed to be unstoppable). A counterviewpoint is that besides the possibility of overcoming this, we can, for example, use generative AI to double-check generative AI, generally known as compound AI, see my discussion at the link here, and another correcting path consists of surrounding generative AI with trust layers, see my coverage at the link here.
I think that the above on generative AI and LLMs is sufficient for the moment as a quick overview.
Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few, if you’d like more details.
Connecting MVQ And The Realm Of Today’s AI
You might have been able to read between the lines and observed that my above rendition of generative AI and LLMs noted that generated outputs can be somewhat unpredictable. That’s the non-determinism I mentioned. This can happen by design in the sense that the outputs are being cobbled together via statistics and tinges of randomness. This can happen by undesirable happenstance in the instance of AI hallucinations or fabrications.
Non-determinism in this case is both a blessing and a curse.
We like it for the freshness and appearance of human-like responsiveness. Happy face. We don’t like it when the output contains falsehoods or otherwise isn’t particularly predictable. Sad face. In addition, testing such a system is quite challenging. The old method of having a set of tightly woven test cases and gauging whether the output precisely matches the anticipated outputs is not in the cards here. A given set of inputs can readily produce a differing array of outputs each time you run the test.
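One practical way around that testing conundrum is to check properties of the output over many runs rather than exact strings. Here’s a minimal Python sketch; `ask_model` is a hypothetical stub standing in for a real generative AI call, and the phrasings and checks are invented for illustration:

```python
import random

def ask_model(prompt, rng):
    """Hypothetical stand-in for a non-deterministic generative AI call.
    A real system would invoke an LLM; here we fake varied phrasings."""
    phrasings = [
        "The patient reports chest pain since Tuesday.",
        "Patient notes chest pain starting Tuesday.",
        "Chest pain reported, onset Tuesday.",
    ]
    return rng.choice(phrasings)

def passes_quality_checks(output):
    """Property checks: we don't demand one exact string, we demand
    that any acceptable output preserves the critical facts."""
    text = output.lower()
    return "chest pain" in text and "tuesday" in text

# Run the same prompt many times and measure a pass rate, instead of
# asserting a single deterministic expected output.
rng = random.Random(42)
trials = 100
passes = sum(
    passes_quality_checks(ask_model("Summarize the note.", rng))
    for _ in range(trials)
)
pass_rate = passes / trials  # compare against a quality bar, e.g. 0.95
```

The shift in mindset is the point: the test asserts a statistical property (the pass rate clears a bar) rather than a byte-for-byte match, which is exactly the kind of evidence an MVQ claim rests on.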
Let’s directly tie this to the need for MVQ.
When an AI startup is devising a system that contains generative AI and LLMs, they must also realize and deal with the undeniable fact that non-determinism is afoot. You can’t put your head in the sand. The best approach is to face up to the reality at hand.
I bring this up because as a frequent startup pitch judge, I see founding teams that are excited about their generative AI and LLM-infused solutions, but they are often taken aback when I ask how they are dealing with the non-determinism that is at play.
A blank stare is not a good response.
One means to hold your head high is to invoke MVQ.
Here’s the deal.
You, in a plainspoken manner, acknowledge that due to the intrinsic nature of generative AI and LLMs and the non-deterministic implications, the system you are devising abides by a level of quality that is aligned with the marketplace you are targeting. It meets a suitable minimum viable quality or MVQ as per the circumstances at hand.
Now then, for you to rightly say this with sincere assurance and aplomb, you need to make darn sure that you’ve done your due diligence and intrepid homework beforehand. Don’t make a hollow pledge of your MVQ. It will gravely be held against you upon any scrutiny of your alleged claims. You can lose your reputation, the startup, potential funding, and your integrity, all in one fell swoop.
Before we get into the means and measures of MVQ, a glimpse at a case study might be insightful.
Case Study Showcasing The Value Of MVQ
Go with me on a short journey.
I was working with an AI startup that was using generative AI to do summaries of written notes by patients. The core proposition was that the AI-generated summaries were to be sent to the medical doctor or medical professionals overseeing the patient, see my coverage on this specific use case at the link here and the link here.
The business value was that medical doctors are time-crunched and time-costly, thus having them manually read the tsunamis of emails, text messages, and other missives by patients presented a problem for them and the hospitals or doctor’s offices they worked at. The doctors often couldn’t get to the reading process until late in the day, undercutting the timeliness of response. When they read the notes, they were often hurried and inadvertently might miss important subtleties. Etc.
Realize too that patient-sent notes are typically filled with confounding wording and require at times intense scrutiny to make sense of them. Another consideration is that such notes are typically contextually based. Without a grasp of the patient’s situation and context, the note might seem inexplicable.
How could this be done more efficiently and effectively instead of having the doctor read each word and laboriously decipher what the patient had to say?
Okay, in a nutshell, this seems a promising candidate for the application of generative AI and LLMs.
Generative AI can parse the notes and seek to convert the wording into something more readable and readily comprehensible. The overarching context associated with the patient can be interleaved into the translated notes. Summaries suitable for a harried medical doctor can be devised, including special quick-look formatting to give needed presence and highlights for rapid attention. And so on.
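A pipeline like that can also hedge against its own non-determinism by routing shaky cases to a human. Here’s a toy Python sketch of such a triage wrapper; the function names and the confidence heuristic are hypothetical stand-ins for real LLM calls, invented purely for illustration:

```python
def summarize_patient_note(note, summarize_fn, confidence_fn, threshold=0.8):
    """Hypothetical triage wrapper: generate a summary, but route
    low-confidence cases to human review rather than straight to the
    doctor. `summarize_fn` and `confidence_fn` stand in for real
    generative AI calls."""
    summary = summarize_fn(note)
    confidence = confidence_fn(note, summary)
    if confidence >= threshold:
        return {"summary": summary, "route": "doctor"}
    return {"summary": summary, "route": "human_review"}

# Toy stand-ins for illustration only.
fake_summarize = lambda note: "Patient reports worsening cough."
fake_confidence = lambda note, summary: 0.9 if "cough" in note else 0.5

result = summarize_patient_note(
    "cough is worse today", fake_summarize, fake_confidence
)
# High-confidence case goes to the doctor; anything below the
# threshold would be routed to "human_review" instead.
```

The design choice here is the escape hatch: acknowledging that the AI will sometimes misread a note, and building the workflow so those cases land with a person, is itself part of meeting an MVQ in a high-stakes domain like healthcare.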
Pause for a moment and reflect on the use of such AI for this rather crucial purpose.
What if the AI misinterprets a patient note? Suppose AI hallucinates or fabricates something that wasn’t in the note. Healthcare or medical-related use of generative AI often treads into risky waters. For example, see my analysis of the tradeoffs involved in using generative AI for mental health therapy at the link here.
The bottom line is that non-determinism is in play and you can’t hide from it.
Write that down and put that on your AI startup business plan. If you are including generative AI or LLMs in your product or service, you are imbuing non-determinism into the product or service too. The good comes along with the not-so-good. Those superpowers come with vulnerability to kryptonite.
A big problem arises. If key stakeholders have an unshakable expectation of on-off quality, consisting of either the AI always working perfectly 100% of the time, or otherwise the AI is not to be used at all, you likely will have to ditch the AI usage. Indeed, in this setting, some crucial stakeholders had a kneejerk reaction that anything less than idealized summaries was completely unacceptable.
An MVQ perspective helps out.
There is a reasonable and feasible middle ground that attains a level of quality befitting this particular marketplace. You need to determine the MVQ that will be acceptable. You must then ensure that the AI matches the resolved MVQ.
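Checking whether the AI matches the resolved MVQ can be as simple as comparing a measured pass rate against the marketplace-derived threshold. A minimal Python sketch, assuming (my assumption, not a prescribed metric) that your MVQ is expressed as a minimum acceptable pass rate over an evaluation set:

```python
def meets_mvq(eval_results, mvq_threshold):
    """Check whether measured quality meets the resolved MVQ.
    `eval_results` is a list of pass/fail booleans from an evaluation
    run; `mvq_threshold` is the marketplace-derived minimum acceptable
    pass rate. Your real MVQ metric might instead be accuracy,
    factuality, latency, or a composite."""
    if not eval_results:
        return False  # no evidence means no MVQ claim
    pass_rate = sum(eval_results) / len(eval_results)
    return pass_rate >= mvq_threshold

# Suppose marketplace research resolved an MVQ of 95% acceptable outputs,
# and a 200-item evaluation run produced 193 acceptable ones.
results = [True] * 193 + [False] * 7
ok = meets_mvq(results, mvq_threshold=0.95)  # 193/200 = 0.965 -> True
```

However simple, having a number like this in hand is what separates a sincere MVQ claim from a hollow pledge when investors start probing.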
This brings up a classic chicken-or-the-egg question that nearly always comes up when discussing the MVQ topic:
Should you proceed to build the AI and then see what MVQ you perchance arrive at (bouncing this off the market to hopefully be acceptable), or should you first determine what is the marketplace’s acceptable MVQ at the get-go and then see if you can build the AI to that target?
My answer is that either path will potentially work — the situation at hand greatly shapes your choice.
An AI startup that has already devised an MVP but hasn’t identified its MVQ is in the situation of having to now make forays into the marketplace and ascertain what the MVQ is for that build. The good news is that due to having an MVP, you luckily have something tangible to aid in discerning the MVQ. The bad news is that you’ve presumably already done work that might need to be redone or worse.
The MVQ-determining process at such a juncture can be especially grueling if your MVP happens to be leaps and bounds below the marketplace MVQ (ouch!). You are going to have some really tough decisions to make. For example, can you realistically bring the MVP up to MVQ? Maybe so, maybe not. Can you convince stakeholders that once the MVP becomes the true product or service the MVQ will be attained? Maybe so, maybe not.
Flipping to the other side of the coin, you can of course start by exploring the marketplace to ascertain what an acceptable MVQ would be, even if you have nothing in hand to showcase what the AI consists of. The downside is that you might get false reads on what the MVQ is. People can conceptually arrive at MVQs that aren’t based on reality. To some degree, having an MVP provides a more grounded MVQ.
Chicken or egg, it all depends.
The Essential Precepts Of MVQ
We ought to define our terminology so that we can agreeably concur on what MVQ is and how it is to be used.
I will conveniently use the online Oxford dictionary to present the definition of each respective word in MVQ. I’ll do so in reverse order of the words at hand, which will make the matter more sensible since the focus is all about quality:
(a) Quality: “The standard of something as measured against other things of a similar kind; the degree of excellence of something.”
(b) Viable: “Capable of working successfully.”
(c) Minimum: “The least or smallest amount or quantity possible, attainable, or required.”
Your job, should you choose to accept it, consists of ascertaining for your AI system the requisite quality of a viable nature that meets the minimum requirements for the targeted marketplace.
All three of those ingredients are a necessity. You ought to closely embrace the completeness theorem of minimum-viable-quality. That means you must achieve all three of those elements. It won’t suffice to come short on any of the three.
Let’s talk about it.
Quality is the mainstay topic at hand, ergo, it is an essential ingredient. Impeccable logic there.
The word “viable” is essential too. You see, the quality level has to be substantiated as viable. This means that if your AI meets the ascertained quality level, it will be considered workable or capable. If your AI isn’t at a viable quality level, it will be perceived as inviable or considered incapable. That boat won’t float.
The word “minimum” is also essential. Here’s why. Suppose you discover a viable quality level and stop there. Great, you’ve met the viability consideration. Is it at the minimum or is it above the minimum? You don’t know. Since it is construed as viable, we’ll take at face value that it meets or exceeds the minimum. We just don’t know how high above the minimum it is.
You are pleased that you have the AI aimed at a viable quality level. This indicates likely marketplace acceptance. Nice. Off you go into la-la-land.
Don’t be so hasty to get overly happy.
Inopportunely, you’ve left yourself open to being outgunned. A competitor comes along and does something similar but at a quality level below yours, yet still hits the viability criteria. The chances are that they might have a lower cost of entry than you because they didn’t have to build in the added quality that you have in your AI. Quality is rarely free, though that’s a whole other story for a different day (philosophically, quality is said to be “free” in that later costs due to poor quality can far exceed what you might have spent upfront to have suitable quality).
All in all, you want to find out what is the minimum. You are welcome to aim higher. Do not though get caught off-guard by not knowing what the minimum is. Minimum gets you in the game. Going beyond the minimum is added gravy. Meanwhile, have your hands on what the acceptable floor is.
Crucial Questions About Your MVQ
Play along with me on an engaging and exciting scenario.
You are a founder or a member of a startup team. The AI product or service that you’ve envisioned is absolutely incredibly superbly amazing. The knock-your-socks-off variety.
The product or service is primarily shaped around generative AI and LLM, or at least has a substantial component of that ilk. You all feel ready to get some funding. Success is in the air.
You manage to get a make-or-break opportunity to do a grand reveal pitch to interested and deep-pocketed prospective investors, perhaps a well-known VC or PE (private equity) firm, and maybe others eager to see what you are cooking up. The spirited energy around this is electric and palpable.
Are you ready to make that pitch?
If you haven’t thought about MVQ, I would say that you are taking a hugely dreadful gamble. You are silently praying that nobody brings up the MVQ topic. Dread resides in your mind, or at least ought to.
That being said, the dice might roll your way, this time. I would say that many VCs and investors don’t yet know about MVQ. You might get lucky and no one there is up to snuff. Yep, you managed to avoid a deadly probing disaster. Head to Vegas and put money on the tables there. Perhaps you are on a winning streak.
To me, you aren’t ready if you haven’t done your due diligence on MVQ.
Alright, you might be thinking, what do you have to do to be ready for MVQ considerations?
I’m glad you asked, thanks.
Here are the fundamental questions that I ask regarding MVQ:
(1) What is your MVQ?
(2) How did you determine the MVQ?
(3) Why will the marketplace find your MVQ acceptable?
(4) Are there competitors with similar, better, or worse MVQs?
(5) Where does the MVQ tend to reach its lower bound limits?
(6) Is there a max(MVQ) that can ultimately be reached?
(7) Can you showcase the MVQ via your MVP?
I kept this to my top lucky-seven questions. There are plenty more questions at the ready.
My rule of thumb is that if someone seems ready for those seven, they probably can handle the rest of my MVQ questions. They have indubitably prepared themselves sufficiently for an inquisition. As they say, preparation is half the battle. Maybe more.
My recommendation is this.
Get underway right now, this moment, not a second later, on figuring out your MVQ, assuming that you have an AI startup or are actively intending to pursue one. Plus, learn more about MVQ. Make MVQ integral to your AI startup endeavors. Seek out others who can expertly help you with your MVQ deliberations. Etc.
Whatever you do, please, please, please don’t hide your head in the sand.
That just won’t do.
Growing Attention To MVQ Provides Proof In The Pudding
MVQ is gaining steam.
Allow me a moment to give some proof in the pudding.
Recently, as a former Stanford Fellow, I attended a special panel session entitled “The Role Of Business: Policy Implications Of Industry Leadership In AI” that was organized by the renowned Stanford University Institute for Human-Centered AI (known widely as HAI, see the link here for information about this incredible Stanford University entity that focuses on advances in AI and does so in an innovative and exemplary multi-disciplinary way).
A distinguished panel member, Sarah Guo, Founder and Managing Partner of the VC firm Conviction, brought up the MVQ topic. You might recognize her name. She was a General Partner at Greylock Partners, a legendary and luminary VC firm, and subsequently founded her own VC firm known as Conviction. As she notes on her company website, see the link here, Conviction “makes early-stage venture capital investments in extraordinary technology startups”.
This is certainly a fitting mission since Sarah Guo is an extraordinary venture capitalist.
I was elated that amongst her insightful remarks about the current state of AI startups and VC funding, she mentioned minimum viable quality or MVQ as an important attribute. Her comments about the considerations of generative AI and LLMs covered the gamut of what to do if the model won’t do what you want it to do, along with the “good enough” premise when dealing with non-deterministic AI.
This reminded me of the famed Herbert Simon’s coining of the term “satisficing” – an innovative concept at the time that revolutionized conventional decision-making theories. He was awarded a Nobel Prize in Economics for his research. In brief, satisficing was stated as “decision makers can satisfice either by finding optimum solutions for a simplified world, or by finding satisfactory solutions for a more realistic world” (source: “Rational Decision Making In Business Organizations” by Herbert Simon, The American Economic Review, 1979).
MVQ is the means and mode of devising, building, and fielding AI that provides satisfactory solutions for a realistic and hardcore world.
Conclusion
Congratulations, you are now aware of MVQ.
Pat yourself on the back. You are part of a growing segment of AI startups, founders, investors, VCs, PEs, and the like who are figuring out that there are tons of opportunities to employ generative AI and LLM applications, but only if you know how to viably and successfully do so.
Those who don’t adopt an MVQ mindset are bound to skip past opportunities that are beyond their thinking. An on-off perspective of whether AI will work in a given business setting is like being trapped inside a box. Your viewpoint staunchly informs you that non-deterministic AI is out of the question. Don’t tread there is what the little boxed-in voice says.
I say, get outside of the box.
Think outside the box via MVQ.
As Albert Einstein wisely noted: “You have to learn the rules of the game. And then you have to play them better than anyone else.”
Get going and make sure your AI startup is victorious and prosperous.
I’m determinedly 100% sure you can do it.