OpenAI, the billion-dollar startup that sparked the generative AI revolution with ChatGPT, is projected to lose $5 billion in 2024. Despite that shortfall, the company was recently reported to be in talks to raise more funding, with its valuation said to soar past $100 billion following a reported $1 billion injection.
Keep in mind that this is just one company training AI models, and many others face similar financial difficulties. Artificial intelligence remains the hottest trend in the tech industry, but it is a highly volatile field that burns through funding at a staggering rate. A panel of scientists and engineers estimates that 80% of these projects fail, and their report highlights the reasons why as well as offering some solutions.
One reason AI projects fail is that company founders don't understand which problem they are trying to solve and focus instead on showing off the technology.
The RAND Corporation, a US non-profit policy think tank, research institute, and public-sector consulting firm, identified five reasons why 80% of AI projects fail. The first is that "industry stakeholders" misunderstand the problem that AI is supposed to solve. Another is that companies don't have enough data to train AI models effectively, which distorts the results and discourages users from returning to the platform.
Further issues, such as inadequate infrastructure, can push AI project failure rates even higher. And even when resources are abundant, company founders often focus more on demonstrating technological superiority over competitors than on delivering value to users. The report details the remaining reasons that accelerate project failure, and the RAND Corporation also offers some solutions to mitigate the risk.
One of them is investing in infrastructure: doing so not only reduces the time it takes to train an AI model, but also yields higher-quality data that can be reused to train other models effectively. Founders must also understand that artificial intelligence is not a magic bullet and has its limitations.
Training an AI model effectively results in a stronger product, but ChatGPT is a good example of how a model can still produce erroneous results even when trained on terabytes of data. The report outlines a total of seven solutions, so check them all out and let us know in the comments whether you agree with them.
News source: RAND