FedML Raises $11.5M to Combine a Decentralized AI Compute Network with MLOps Tools
In its latest funding round, FedML, an AI startup founded by Salman Avestimehr, has raised $11.5 million. The startup aims to make AI development cheaper and faster by combining a decentralized AI compute network with MLOps tools, giving enterprises a way to create and fine-tune their own AI models.

Spearheaded by Salman Avestimehr, the inaugural director of the USC-Amazon Center on Trustworthy Machine Learning, the startup promises businesses an easy path to train, refine, monitor, and improve AI models either in the cloud or at the edge. The venture, FedML, has raised $11.5 million in seed funding, valuing the company at $56.5 million. The round was led by Camford Capital, with participation from Road Capital and Finality Capital.
Many businesses want to train or tune bespoke AI models on industry-level or company-specific data to serve a wide range of business requirements, Avestimehr told TechCrunch in an email interview. But he also stressed that "custom AI models have historically been expensive to build and maintain owing to steep cloud infrastructure costs and high data and engineering expenses. Additionally, the proprietary data needed to train custom AI models is frequently siloed, regulated, or sensitive."
FedML, however, pitches itself as a solution. According to Avestimehr, FedML offers a collaborative AI platform that lets developers and businesses work together on AI tasks by sharing models, compute resources, and data.
FedML can run any number of custom AI models, or models drawn from the open-source community. On FedML's platform, customers can form a group of collaborators and automatically synchronize AI applications across devices such as PCs. Collaborators can enroll the devices used to train AI models, such as mobile devices or servers, and monitor training progress in real time. A minimal sketch of the federated-training pattern this describes appears below.
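The sketch below illustrates the general federated-averaging pattern that a platform like this coordinates: each collaborator trains a copy of the shared model on its own private data, and only the resulting weights leave the device to be averaged into the next global model. It is written in plain PyTorch as an assumption-laden illustration; it does not use FedML's actual API, and the function names are hypothetical.

```python
# Illustrative federated-averaging sketch (not FedML's API).
# Each collaborator trains locally; only model weights are shared and averaged.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one collaborator's private data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()

def federated_average(state_dicts):
    """Average collaborators' weights to form the next global model."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        for sd in state_dicts[1:]:
            avg[key] += sd[key]
        avg[key] /= len(state_dicts)
    return avg

# One training round (device_loaders would be each collaborator's private data):
# global_model = nn.Linear(10, 2)
# updates = [local_update(global_model, loader) for loader in device_loaders]
# global_model.load_state_dict(federated_average(updates))
```

The key property is that raw data never leaves a collaborator's device, which is what lets siloed or regulated data still contribute to a shared model.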
FedML recently released FedLLM, a training pipeline for building domain-specific large language models (LLMs) in the vein of OpenAI's GPT-4 on proprietary data. Compatible with popular LLM libraries such as Microsoft's DeepSpeed and Hugging Face's, FedLLM is designed to speed up custom AI development while preserving security and privacy, Avestimehr said.
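For context, the kind of workflow a pipeline like FedLLM packages looks roughly like the following: fine-tuning an open base model on in-house text with Hugging Face Transformers. This is a generic sketch under assumed inputs (the sample documents and output directory are hypothetical), not FedLLM's interface.

```python
# Generic domain fine-tuning sketch with Hugging Face Transformers
# (illustrative of the workflow FedLLM automates, not its actual API).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical proprietary documents standing in for company data.
docs = ["Internal support ticket: customer reports login timeout ...",
        "Quarterly product FAQ: how to configure the export module ..."]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = Dataset.from_dict({"text": docs}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    # DeepSpeed could be enabled here via TrainingArguments(deepspeed=...).
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The pitch of a managed pipeline is that it handles the surrounding concerns, distributed training, data privacy, and serving, rather than the fine-tuning loop itself.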
Like many other MLOps platforms, such as Galileo and Arize, or even incumbents like AWS, Microsoft, and Google Cloud, FedML helps streamline the process of deploying AI models to production and then maintaining and monitoring them. But FedML's ambitions go beyond AI and machine learning model tooling.
Avestimehr said the primary objective is to build a community of CPU and GPU resources to host and serve models once they are ready for deployment. While the specifics are still being worked out, FedML plans to incentivize users to contribute compute to the platform with tokens or other forms of compensation.
Distributed, decentralized compute for serving AI models isn't novel; Run.AI, Gensyn, and Petals have all made attempts. Avestimehr, however, is confident that FedML can find greater success by pairing this compute paradigm with an MLOps suite.
FedML enables bespoke AI by allowing enterprises and developers to build large-scale, private, proprietary LLMs at a fraction of the cost, Avestimehr said. He pointed to FedML's selling point: training, deploying, monitoring, and refining ML models anywhere while collaborating on the combined data, models, and compute, noticeably reducing the cost and time to market.
In light of these advancements, it wouldn't be a surprise if FedML takes the MLOps and AI industry by storm, joining the ranks of platforms like the AppMaster no-code platform, known for innovative contributions and tooling in the tech industry.


