In a significant step to bolster its on-premises database offering, DataStax has added vector search capabilities to DataStax Enterprise (DSE). Built on open-source Apache Cassandra, DSE now lets businesses create generative AI applications within their own datacenters, as well as in cloud or hybrid setups.
The announcement follows the addition of vector search to the company's Astra DB cloud database. DataStax positions its scalable, secure vector databases as a foundation for building generative AI applications across multiple cloud environments.
With this feature, developers can use DSE as a secure vector database for projects built around large language models, AI assistants, and real-time generative AI. Vector search in DSE stores data as 'vector embeddings', numeric representations that capture semantic similarity, a critical building block for generative AI applications such as those based on GPT-4.
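To make the idea concrete, a vector search ranks stored embeddings by their similarity to a query embedding. Below is a minimal, self-contained Python sketch of that lookup using cosine similarity; the toy embeddings, document ids, and function names are illustrative only and are not part of the DSE or Cassandra API, where vectors would instead be stored in a table and queried via CQL.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, store, k=1):
    """Return the k document ids whose embeddings are most similar to query."""
    ranked = sorted(store, key=lambda doc_id: cosine_similarity(query, store[doc_id]),
                    reverse=True)
    return ranked[:k]

# Toy 3-dimensional embeddings keyed by document id (hypothetical data).
store = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.0, 1.0, 0.0],
    "doc-c": [0.9, 0.1, 0.0],
}

# doc-a and doc-c point in nearly the same direction as the query,
# so they rank above doc-b.
print(nearest([1.0, 0.05, 0.0], store, k=2))
```

Real embeddings typically have hundreds or thousands of dimensions and are produced by an embedding model; the database's job is to index them so this similarity ranking stays fast at scale.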
Commenting on the enhancement, Ed Anuff, Chief Product Officer of DataStax, reiterated the company's commitment to providing vector databases that meet enterprise requirements. He noted that while cloud environments offer compelling scalability and versatility, some businesses prefer to keep data on their own premises to satisfy regulatory or other business requirements. Vector search in DSE aims to make AI adoption quick and hassle-free, regardless of where a business chooses to store its data.
The feature is part of DataStax Enterprise 7.0, a release that also brings a host of advancements in cloud-native operations and generative AI capabilities. A developer preview of vector search in DSE is available now.