Kolena, known for its tools for evaluating, benchmarking, and validating the performance of AI models, recently announced that it has raised $15 million. The funding round was led by Lobby Capital, with participation from investors including SignalFire and Bloomberg Beta.
The fresh capital brings Kolena's total raised to $21 million. The company plans to use the funds to accelerate its growth: expanding research, building alliances with regulatory authorities, and driving sales and marketing initiatives. Co-founder and CEO Mohamed Elgendy confirmed the news in a recent email interview with TechCrunch.
Elgendy highlighted the vast potential of AI applications but pointed to lingering skepticism among developers and the public about whether AI can be trusted. He emphasized the need for methodical, effective deployment of the technology so that it enhances rather than impairs the digital experience, arguing that the industry should nurture and guide these advances toward sound use rather than abandon them over misguided applications.
Elgendy launched Kolena in 2021 together with Andrew Shi and Gordon Hart, who brought six years of experience from AI divisions at Amazon, Palantir, Rakuten, and Synapse. The founders came together around the idea of a "model quality framework": bringing unit testing and comprehensive testing to AI models in an enterprise-friendly form.
Elgendy envisions Kolena as that model quality framework. The platform focuses on empowering teams to continuously run scenario-based unit tests, along with end-to-end tests of entire AI and machine learning systems rather than just their subsets.
Kolena is also useful for uncovering gaps in an AI model's test data coverage. The platform integrates risk management features to monitor the risks that come with deploying a particular AI system, and its UI lets users formulate test cases to assess a model's performance, identify the likely causes of performance failures, and compare the model against others.
Unlike conventional approaches that dwell on generic metrics like an aggregate accuracy score, Kolena lets teams create and manage tests for the specific scenarios an AI application will have to face. Elgendy affirms that every model has its own strengths and weaknesses: an overall accuracy score for a car-detection model, for instance, says little about how effective it is under varying conditions.
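To make the distinction concrete, here is a minimal sketch of scenario-level evaluation, assuming simple labeled test records. This is illustrative Python, not Kolena's actual API, and the scenario tags and field names are hypothetical:

```python
from collections import defaultdict

# Hypothetical test records: each pairs a model prediction with ground
# truth and a scenario tag (e.g. "night", "rain").
test_results = [
    {"scenario": "daytime", "predicted": "car",   "actual": "car"},
    {"scenario": "daytime", "predicted": "car",   "actual": "car"},
    {"scenario": "night",   "predicted": "truck", "actual": "car"},
    {"scenario": "night",   "predicted": "car",   "actual": "car"},
    {"scenario": "rain",    "predicted": "none",  "actual": "car"},
]

def accuracy_by_scenario(results):
    """Return per-scenario accuracy instead of a single aggregate number."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r["scenario"]] += 1
        correct[r["scenario"]] += r["predicted"] == r["actual"]
    return {s: correct[s] / totals[s] for s in totals}

aggregate = sum(r["predicted"] == r["actual"] for r in test_results) / len(test_results)
print(f"aggregate: {aggregate:.2f}")            # 0.60 overall looks passable
for scenario, acc in accuracy_by_scenario(test_results).items():
    print(f"{scenario}: {acc:.2f}")             # daytime 1.00, night 0.50, rain 0.00
```

A model can look acceptable in aggregate (60% here) while failing outright in a scenario that matters, which is exactly the blind spot scenario-level tests are meant to expose.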
If Kolena delivers on its promise, it could be a game-changer for data scientists, who spend considerable time and effort building the models that underpin AI apps. Elgendy asserts that Kolena's platform is one of the very few to offer complete control over the data types, evaluation logic, and other components integral to an AI model test.
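What control over evaluation logic could look like in practice, in a hypothetical sketch (Kolena has not published its interfaces, so the names here are invented): a test case bundles the team's own data with a team-defined metric rather than a fixed built-in score.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class TestCase:
    """Hypothetical test case: any data shape plus custom evaluation logic."""
    name: str
    samples: Sequence[dict]                     # the team chooses the data type
    metric: Callable[[Sequence[dict]], float]   # the team supplies the metric

def false_negative_rate(samples: Sequence[dict]) -> float:
    """One possible custom metric: the share of real cars the model missed."""
    actual = [s for s in samples if s["actual"] == "car"]
    missed = [s for s in actual if s["predicted"] != "car"]
    return len(missed) / len(actual) if actual else 0.0

night_case = TestCase(
    name="cars_at_night",
    samples=[{"predicted": "car",  "actual": "car"},
             {"predicted": "none", "actual": "car"}],
    metric=false_negative_rate,
)
print(f"{night_case.name}: {night_case.metric(night_case.samples):.2f}")  # 0.50
```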
Elgendy also points to Kolena's privacy assurances: customers never need to upload their data or models to the platform. Kolena preserves only model test outcomes, for future benchmarking, and these can be deleted on request.
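A rough sketch of how such a split could work, assuming hypothetical endpoint and helper names (none of this is Kolena's published API): evaluation runs on the customer's side, and only the resulting test outcomes are uploaded for benchmarking.

```python
import json
import urllib.request

# Hypothetical ingestion endpoint; the real service details are not public.
RESULTS_ENDPOINT = "https://api.example.com/v1/test-results"

def run_tests_locally(model, test_suite) -> dict:
    """Evaluate on the customer's own infrastructure.

    Raw data and model weights never leave this process; only the
    per-test metrics computed here are reported.
    """
    # Assumes each case exposes a name and an evaluate(model) -> float.
    return {case.name: case.evaluate(model) for case in test_suite}

def report_outcomes(results: dict, api_key: str) -> None:
    """Upload only scalar test outcomes, which can later be deleted on request."""
    payload = json.dumps({"metrics": results}).encode("utf-8")
    request = urllib.request.Request(
        RESULTS_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(request)
```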
Elgendy regards robust testing as critical for eliminating the risks of an AI or machine learning system before it is deployed. Lamenting how casually models are often tested today, and how frequently machine learning proofs of concept fail as a result, he affirms Kolena's commitment to thorough model evaluation: giving machine learning managers, product managers, and executives unparalleled visibility into a model's test coverage and product-specific functional requirements, so they can influence product quality from the very beginning.