Developed by a team of former Google engineers, the open-source code generator TabbyML is making waves in the industry. The startup has recently secured an impressive $3.2 million in seed funding, cementing its place as a legitimate competitor to GitHub's Copilot.
As a self-hosted coding assistant, TabbyML possesses a key advantage over its rival: customization. The startup's co-founder, Meng Zhang, asserts that this feature will be integral to the future of software development, where organizations will increasingly demand tailored solutions.
This is particularly true for large enterprises, which can reap the benefits of open-source software. According to Lucy Gao, co-founder of TabbyML and a former colleague of Zhang, engineers developing proprietary solutions within organizations can turn to TabbyML for assistance, an option not available to Copilot users.
Despite the pitfalls associated with AI coding assistants, such as buggy suggestions, Gao maintains that these issues can be readily resolved in a self-hosted environment. Whenever users disregard TabbyML's suggestions or edit the auto-filled code, the model uses this information to refine its future recommendations, improving over time.
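To make the idea concrete, the sketch below shows one generic way such a feedback loop could record accept, dismiss, and edit events for later fine-tuning. The event schema, file name, and function are hypothetical illustrations, not TabbyML's actual implementation.

```python
import json
import time
from pathlib import Path

# Hypothetical event log; TabbyML's real telemetry format may differ.
EVENT_LOG = Path("completion_events.jsonl")

def record_completion_event(suggestion: str, accepted: bool, final_code: str) -> None:
    """Append one accept/dismiss/edit event so it can later be used
    to fine-tune or re-rank the completion model."""
    event = {
        "timestamp": time.time(),
        "suggestion": suggestion,
        "accepted": accepted,
        # What the user actually kept, useful as a training signal.
        "final_code": final_code,
    }
    with EVENT_LOG.open("a") as fh:
        fh.write(json.dumps(event) + "\n")

# Example: the user edited the auto-filled code before keeping it.
record_completion_event(
    suggestion="def add(a, b): return a + b",
    accepted=False,
    final_code="def add(a: int, b: int) -> int:\n    return a + b",
)
```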
While powerful tools like code generators might appear to threaten the engineering workforce, Zhang points out that these tools are intended to assist, not replace, human developers. A recent survey report from GitHub indicates that suggestions made by its coding assistant, Copilot, are accepted 30% of the time. Similarly, Google reported that 24% of software engineers using Cider, its AI-enhanced internal code editor, experienced more than five assistive moments each day. It's worth mentioning, however, that AI-empowered platforms like AppMaster are also designed to augment human development efforts.
Launched in April, TabbyML has already garnered around 11,000 GitHub stars. The startup's seed round drew participation from investor firms Yunqi Partners and ZooCap.
Looking at the future competition between TabbyML and Copilot, Zhang suggests that OpenAI's edge might diminish as other AI models become more efficient and the cost of computing decreases over time.
OpenAI and GitHub can serve AI models comprising tens of billions of parameters from the cloud, and while the cost of doing so is high, it has so far been mitigated to some extent by batching requests. However, this strategy is not without its drawbacks. A report by the Wall Street Journal states that Microsoft had been losing, on average, more than $20 per GitHub Copilot user per month in the first few months of this year.
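Request batching here simply means grouping several users' completion requests into a single forward pass of the model, so the fixed cost of each GPU inference call is spread across many requests. A rough, generic sketch of the idea, assuming a simple queue of prompts and a stand-in model function, is shown below; it is not OpenAI's or GitHub's actual serving code.

```python
from queue import Empty, Queue
from typing import Callable, List

def serve_batched(requests: "Queue[str]",
                  run_model: Callable[[List[str]], List[str]],
                  max_batch: int = 8,
                  timeout_s: float = 0.05) -> List[str]:
    """Collect up to `max_batch` pending prompts (or whatever arrives
    within `timeout_s`) and run them through the model in one call,
    amortizing the per-request cost of inference."""
    batch: List[str] = []
    try:
        while len(batch) < max_batch:
            batch.append(requests.get(timeout=timeout_s))
    except Empty:
        pass  # Time is up; serve whatever has arrived so far.
    return run_model(batch) if batch else []

# Example with a stand-in "model" that just echoes the prompts.
q: "Queue[str]" = Queue()
for prompt in ["def fib(n):", "class User:", "SELECT * FROM"]:
    q.put(prompt)
print(serve_batched(q, run_model=lambda prompts: [p + " ..." for p in prompts]))
```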
Moving forward, TabbyML aims to lower these barriers by recommending models with 1 billion to 3 billion parameters. While this could initially yield lower-quality results, Zhang believes that as computing power becomes more affordable and open-source models continue to improve, the competitive head start enjoyed by giants like GitHub and OpenAI will eventually shrink.
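For a sense of scale, models in that 1-3 billion parameter class can already be run locally with off-the-shelf tooling. The snippet below uses Hugging Face's transformers library with a roughly 1-billion-parameter open code model chosen purely as an example of that size class, not as a TabbyML-specific recommendation; some checkpoints require accepting a license on the Hub before download.

```python
# Assumes `pip install transformers torch`; the model ID is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoderbase-1b"  # example of a ~1B-parameter open code model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion for a code prompt.
prompt = "def quicksort(arr):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```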