OpenAI Enhances Text-Generating AI Features and Lowers Pricing Amid Competition
OpenAI introduces new versions of GPT-3.5-turbo and GPT-4, unveils a function calling capability, and reduces pricing for the GPT-3.5-turbo model. These updates come as competition in the generative AI space intensifies, prompting OpenAI to focus on a strategy of incremental improvements.

As the generative AI sector becomes increasingly competitive, OpenAI is working to stay ahead by enhancing its text-generating models and lowering their prices. The company has introduced new versions of GPT-3.5-turbo and GPT-4, its flagship text-generating AI models, both of which now support function calling.
Function calling allows developers to describe programming functions to GPT-3.5-turbo and GPT-4, which can then respond with the structured arguments needed to invoke those functions. As explained in OpenAI's blog post, this feature enables chatbots that answer questions by calling external tools, conversion of natural language into database queries, and extraction of structured data from text. The models are designed to detect when a function call is needed and to return JSON that adheres to the function's signature, yielding more reliable, structured output.
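For illustration, here is a minimal sketch of how a developer might use the new functions parameter with the openai Python package (in its 0.27-era ChatCompletion style); the get_weather function and its schema are hypothetical examples, not part of OpenAI's announcement.

```python
import json
import openai

# Hypothetical local function the model may ask us to call.
def get_weather(city: str) -> dict:
    return {"city": city, "forecast": "sunny", "temp_c": 24}

# Describe the function to the model as a JSON schema.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether a call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus JSON-encoded arguments;
    # actually executing the function remains the developer's job.
    args = json.loads(message["function_call"]["arguments"])
    print(get_weather(**args))
```

Note that the model never runs code itself; it only emits the call it wants made, which keeps the developer in control of what actually executes.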
In addition to function calling, OpenAI is rolling out an updated version of GPT-3.5-turbo with a much larger context window. The context window, measured in tokens, is the amount of text the model takes into account before generating its next output. Models with small context windows can lose track of even recent conversation content, leading to off-topic responses. The latest GPT-3.5-turbo variant offers a context length four times larger than its predecessor's (16,000 tokens versus 4,000), at twice the price: $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens. OpenAI notes that this model can ingest roughly 20 pages of text in a single request. That still falls short of the flagship model from AI startup Anthropic, which can reportedly process hundreds of pages at once. OpenAI is also testing a GPT-4 version with a 32,000-token context window, albeit in a limited release.
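Using the larger window is simply a matter of selecting the corresponding model name (gpt-3.5-turbo-16k in OpenAI's naming at launch); a rough sketch, with the long transcript standing in as a placeholder input:

```python
import openai

# A long document or conversation history that would overflow the
# standard 4,000-token window but fits within the 16,000-token variant.
long_transcript = open("meeting_notes.txt").read()  # placeholder input

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "Summarize the key decisions."},
        {"role": "user", "content": long_transcript},
    ],
)
print(response["choices"][0]["message"]["content"])
```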
In more good news for developers, OpenAI is also cutting the price of input tokens for the original GPT-3.5-turbo model by 25%. The cost is now $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages per dollar. Furthermore, the company is lowering the price of text-embedding-ada-002, a popular text embedding model, by 75%, to $0.0001 per 1,000 tokens. Text embeddings measure how related text strings are and are widely used in search and recommendation systems. OpenAI attributes the price reductions to efficiency improvements in its systems.
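To make the new rates concrete, a small back-of-the-envelope helper like the one below (a hypothetical sketch, not an OpenAI utility) estimates the cost of a request from its token counts using the prices quoted above.

```python
# Prices per 1,000 tokens after the reduction (USD).
GPT35_INPUT = 0.0015
GPT35_OUTPUT = 0.002
ADA_EMBEDDING = 0.0001

def chat_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of one GPT-3.5-turbo request."""
    return (input_tokens / 1000) * GPT35_INPUT + (output_tokens / 1000) * GPT35_OUTPUT

def embedding_cost(tokens: int) -> float:
    """Estimated cost of embedding text with text-embedding-ada-002."""
    return (tokens / 1000) * ADA_EMBEDDING

# e.g. a 3,000-token prompt with a 500-token reply:
print(f"${chat_cost(3000, 500):.4f}")    # about $0.0055
print(f"${embedding_cost(10_000):.4f}")  # about $0.0010
```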
OpenAI's strategy going forward seems to focus on incremental updates to its existing models rather than developing new large-scale models from scratch. OpenAI CEO Sam Altman recently stated at an Economic Times conference that the company hasn't started training a successor to GPT-4, implying that there is still a significant amount of work and refinement to be done on its current models.
With powerful no-code tools like AppMaster.io allowing users to create backend, web, and mobile applications with ease, integrating improved AI models such as GPT-3.5-turbo and GPT-4 from OpenAI could deliver even richer user experiences in the rapidly evolving tech landscape.


