CTGT Unveils AI Platform to Combat Bias and Hallucinations in Models

In a significant advance for artificial intelligence, CTGT, Inc. announced the launch of its upgraded AI platform, aimed at mitigating the bias and hallucinations prevalent in AI models, including DeepSeek's. The announcement was made at the VentureBeat Transform 2025 conference, held in San Francisco on June 24-25, 2025. The new platform has demonstrated notable efficacy, enabling DeepSeek to accurately answer 96 percent of sensitive questions, up from a prior rate of 32 percent.
CTGT was founded in February 2025 with initial funding of $7 million from notable investors, including Gradient (Google's early-stage AI fund), General Catalyst, Y Combinator, and prominent figures such as François Chollet, the creator of Keras. The company specializes in deploying AI for high-risk applications, ensuring that enterprises can leverage AI technologies without falling victim to the pitfalls of misinformation.
Cyril Gorlla, co-founder and CEO of CTGT, emphasized the urgency of addressing bias in AI systems. He stated, "In testing, CTGT was able to identify the exact model features causing bias and remove them, so that DeepSeek R1 could function without bias." This development comes at a time when AI hallucinations are increasingly concerning for enterprises. According to a report by McKinsey, $67.4 billion in global losses were attributed to AI hallucinations across various sectors in 2024. Furthermore, a study conducted by Deloitte revealed that 47 percent of enterprise AI users reported making significant business decisions based on erroneous outputs from AI systems.
The urgency of improving AI model trustworthiness is further underscored by a concerning trend: newer AI models are exhibiting higher hallucination rates. For instance, the recently launched ChatGPT 4.5 model has a documented hallucination rate of 30 percent, the highest recorded to date. By contrast, DeepSeek R1 currently maintains a hallucination rate of 14.9 percent, though much of its misinformation is allegedly shaped by external factors, including constraints imposed by state-sponsored programming from the Chinese government.
CTGT's platform leverages advanced mathematical methods developed at the University of California at San Diego, allowing for real-time bias removal and enhanced accuracy without the need for extensive retraining or fine-tuning. This enables enterprises to deploy AI models faster and more cost-effectively, with claims of deployment speeds up to 500 times quicker than traditional methodologies.
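CTGT has not published the details of its method, but the general idea of identifying a model feature responsible for an unwanted behavior and removing it at inference time can be illustrated with a minimal sketch. The example below is purely hypothetical: it estimates a "bias" direction in a model's hidden activations as the difference of mean activations between flagged and neutral prompts, then linearly projects that direction out, without any retraining.

```python
import numpy as np

def estimate_bias_direction(biased_acts, neutral_acts):
    """Estimate a candidate 'bias feature' as the normalized difference of
    mean activations between flagged and neutral prompts (a diff-of-means probe)."""
    direction = biased_acts.mean(axis=0) - neutral_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate_direction(activations, direction):
    """Remove the component of each activation vector along the bias
    direction, leaving all orthogonal components untouched."""
    coeffs = activations @ direction  # per-row component along the direction
    return activations - np.outer(coeffs, direction)

# Toy demo with synthetic activations carrying a planted "bias" component.
rng = np.random.default_rng(0)
dim = 16
planted = np.zeros(dim)
planted[0] = 1.0
neutral_acts = rng.normal(size=(100, dim))
biased_acts = neutral_acts + 3.0 * planted  # flagged runs carry an extra component

direction = estimate_bias_direction(biased_acts, neutral_acts)
cleaned = ablate_direction(biased_acts, direction)

# After ablation, the residual component along the estimated direction is ~0.
print(float(np.abs(cleaned @ direction).max()))
```

Because the intervention is a single projection applied to activations at inference time, it avoids the cost of retraining or fine-tuning entirely, which is consistent with the speed advantage the company claims, though the actual mechanism CTGT uses may differ substantially from this sketch.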
Gorlla further stated, "AI model trustworthiness is a major obstacle to enterprises achieving return on their AI investments. Each week I'm fielding calls from Fortune 500 executives wanting advice on how to solve this problem." His remarks highlight the growing recognition of the critical nature of AI reliability as businesses increasingly integrate these technologies into their operations.
CTGT's recent developments illustrate a proactive approach to addressing the shortcomings of current AI technologies, positioning the company as a leader in the quest for trustworthy AI. As enterprises continue to navigate the complexities of AI deployment, CTGT’s platform may prove instrumental in mitigating risks associated with biased and hallucinated outputs, ultimately fostering greater confidence in AI applications across industries.