OpenAI Introduces GPT-4o Fine-Tuning, Enhancing AI’s Role in Business
OpenAI has unveiled an update to its artificial intelligence (AI) capabilities. Developers can now fine-tune GPT-4o, the company’s most advanced language model, to suit specific business needs. This feature, long sought by the tech community, opens doors for customized AI applications across various industries.
Fine-Tuning and Why It Matters
Fine-tuning is a process that allows developers to customize a pretrained AI model for specific tasks or domains. OpenAI explains, “Developers can now fine-tune GPT-4o with custom datasets to get higher performance at a lower cost for their specific use cases. Fine-tuning enables the model to customize the structure and tone of responses or to follow complex domain-specific instructions.”
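For developers curious what this looks like in practice, here is a minimal sketch using OpenAI's Python SDK. The training file name is a hypothetical placeholder; "gpt-4o-2024-08-06" is the GPT-4o snapshot OpenAI lists as fine-tunable at launch.

```python
# Minimal sketch: upload a training file, then start a fine-tuning job.
# The file name is hypothetical; the model string is the GPT-4o snapshot
# OpenAI names as fine-tunable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

training_file = client.files.create(
    file=open("support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)  # poll until the job reports "succeeded"
```

Once the job succeeds, it returns a fine-tuned model name (prefixed "ft:") that can be passed to the chat completions endpoint like any stock model.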
The fine-tuning capability is crucial for commerce because it allows businesses to create AI tools tailored to their needs, improving customer service, streamlining operations, and driving innovation. For example, a retail company could fine-tune GPT-4o to better understand product descriptions and customer inquiries, while a financial firm might customize it to analyze market trends and generate reports.
OpenAI notes, “Developers can already produce strong results for their applications with as little as a few dozen examples in their training dataset.” This suggests that even small businesses or organizations with limited data can benefit from this technology.
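Those training examples use the same chat format as the API itself, with each line of a JSONL file holding one example conversation. Here is a sketch of that format; the retail content is invented for illustration.

```python
# Sketch of the chat-style JSONL training format described in OpenAI's
# fine-tuning docs; the retailer example content here is invented.
import json

examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer product questions for Acme Retail."},
            {"role": "user", "content": "Is the ProBlend 500 pitcher dishwasher safe?"},
            {"role": "assistant", "content": "Yes, the pitcher and lid are top-rack dishwasher safe; wipe the base clean only."},
        ]
    },
    # ...per OpenAI, a few dozen examples like this can already produce strong results
]

with open("support_examples.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```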
OpenAI offers an incentive to encourage adoption: “1M training tokens per day for free for every organization through September 23.” In AI terms, tokens are pieces of words that the model processes. This offer provides free processing power, lowering the barrier to entry for businesses interested in exploring AI customization.
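To make "tokens" concrete, OpenAI's open-source tiktoken library can show how a sentence breaks into the pieces the model actually counts. This sketch assumes an up-to-date tiktoken release that recognizes the "gpt-4o" model name.

```python
# Rough illustration of tokenization using OpenAI's tiktoken library.
# Assumes the installed version maps "gpt-4o" to its encoding.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4o")
text = "Fine-tuning customizes a pretrained model for a specific task."
tokens = enc.encode(text)
print(len(tokens))         # how many of the 1M free daily tokens this would use
print(enc.decode(tokens))  # decoding round-trips to the original text
```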
How It Works, What It Costs
The fine-tuning process for GPT-4o is now accessible to developers on all paid usage tiers. OpenAI has provided specific pricing details: “GPT-4o fine-tuning training costs $25 per million tokens, and inference is $3.75 per million input tokens and $15 per million output tokens.”
To clarify these terms: training teaches the model your specific data and requirements; inference is using the trained model to generate responses or perform tasks. Input tokens are the words or pieces of words you feed into the model, and output tokens are what the model generates in response.
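Plugging the quoted prices into a quick back-of-the-envelope calculation makes the economics easier to see. The workload figures below are hypothetical, chosen only to show the math.

```python
# Cost math using the prices quoted above; the workload numbers are invented.
TRAIN_PER_M = 25.00   # $ per 1M training tokens
IN_PER_M = 3.75       # $ per 1M input tokens at inference
OUT_PER_M = 15.00     # $ per 1M output tokens at inference

training_tokens = 2_000_000   # e.g., dataset tokens times training epochs
monthly_input = 5_000_000
monthly_output = 1_000_000

training_cost = training_tokens / 1e6 * TRAIN_PER_M
inference_cost = monthly_input / 1e6 * IN_PER_M + monthly_output / 1e6 * OUT_PER_M
print(f"One-time training: ${training_cost:.2f}")   # $50.00
print(f"Monthly inference: ${inference_cost:.2f}")  # $33.75
```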
A mini version, GPT-4o mini, is also available with 2 million free training tokens per day during the promotional period, making it more accessible for smaller projects or businesses.
While OpenAI’s announcement marks a significant development, it’s worth noting that fine-tuning capabilities are not unique to GPT-4o. Other prominent language models also offer fine-tuning options. For instance, Google’s BERT (Bidirectional Encoder Representations from Transformers) has long been available for fine-tuning, particularly for natural language processing tasks. Hugging Face, a popular AI community and model hub, provides tools for fine-tuning many models. Meta’s LLaMA model, released as open source, also allows for fine-tuning.
Early Adopters
While the long-term impact of this technology is still unfolding, early adopters report encouraging outcomes. Cosine, a company developing an AI software engineering assistant called Genie, claims to have achieved top scores on the SWE-bench benchmark, a measure of software engineering capabilities.
OpenAI’s announcement highlights Cosine’s achievement: “With a fine-tuned GPT-4o model, Genie achieves a SOTA score of 43.8% on the new SWE-bench Verified benchmark, announced last Tuesday.” SOTA stands for “state of the art,” indicating the highest performance achieved on a specific task.
Similarly, Distyl, an AI solutions provider, reports first place on the BIRD-SQL benchmark, which evaluates text-to-SQL conversion capabilities. According to OpenAI, “Distyl’s fine-tuned GPT-4o achieved an execution accuracy of 71.83% on the leaderboard and excelled across tasks like query reformulation, intent classification, chain-of-thought, and self-correction, with particularly high performance in SQL generation.”
The introduction of GPT-4o fine-tuning raises questions about data privacy and AI safety. OpenAI addresses these concerns: “Fine-tuned models remain entirely under your control, with full ownership of your business data, including all inputs and outputs. This ensures your data is never shared or used to train other models.”
The potential impact on commerce and industry practices is substantial. As OpenAI states, “This is just the start — we’ll continue to invest in expanding our model customization options for developers.”