Today we delve into the latest developments in generative AI, which have been gaining momentum over the past few weeks. As you might recall, OpenAI introduced its language model interface, ChatGPT, last fall. This interface allowed non-technical users to harness the power of large language models; under the hood, it was built on the same techniques as InstructGPT.

Developing such models requires massive resources, both time and money. Consequently, up until now, only large tech companies could afford to create them, leaving smaller enterprises and individuals with little choice but to rely on ChatGPT.

However, this landscape shifted dramatically about a month ago when Meta (formerly Facebook) unveiled LLaMa, a family of models ranging up to 65 billion parameters. Why did this matter? Because instead of providing an interface like ChatGPT, they released the LLaMa model weights themselves under a non-commercial license. In doing so, they essentially gifted a top-tier language model to the public.

This move set off a chain reaction in the AI world.

Fine-tuning is a process in AI that involves adjusting an existing model to better suit specific needs. Language models are made up of mathematical probabilities related to words and phrases. By altering these probabilities, you can create a more specialized model. LLaMa, with its high-quality foundation, became an ideal starting point for a wave of innovation.
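To make the "altering probabilities" idea concrete, here is a deliberately tiny sketch: a word-pair (bigram) counter whose probabilities shift when you continue training it on domain text. Real transformer fine-tuning adjusts millions of neural network weights rather than a lookup table, so treat this purely as an illustration of the principle, not the mechanism.

```python
from collections import Counter, defaultdict

def train_bigrams(text, counts=None):
    """Count word-pair occurrences; pass in existing counts to 'fine-tune' them."""
    counts = counts if counts is not None else defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def next_word_probs(counts, word):
    """Turn the raw counts for one word into next-word probabilities."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# "Pre-training" on general text: after "the", the model expects "market".
base = train_bigrams("the market is open the market is closed")

# "Fine-tuning" on domain text shifts those probabilities toward the new domain.
tuned = train_bigrams("the patient is stable the patient is resting", base)
# Now "the" is followed by "market" or "patient" with equal probability.
```

The point is that fine-tuning does not start from scratch: the general-purpose knowledge stays in place, and additional training data nudges the model toward a specialty.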

Imagine generative AI as pizza. Previously, you had to order from OpenAI’s ChatGPT, as it was the only viable option. Crafting your own pizza (model) from scratch was too difficult and expensive. LLaMa is like a pre-baked pizza kit, allowing you to add your desired toppings without the hassle of creating the base.

In the weeks since LLaMa’s release, numerous new models have emerged. These models boast high performance and can even run, albeit slowly, on modest hardware like a Raspberry Pi. With fine-tuning capabilities, they can now cater to various fields, including research, healthcare, and finance.

Projects like GPT4ALL now offer private chat instances on your laptop, providing the advantages of ChatGPT without relying on an internet connection or third-party data usage. This ensures privacy, making it a perfect fit for industries that handle sensitive information.
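As a rough sketch of what "a private chat instance on your laptop" looks like in practice, here is how the GPT4ALL Python bindings can be used. The model filename below is an assumption for illustration; GPT4ALL downloads the weights once and then everything runs locally, with no data leaving your machine.

```python
# Hypothetical sketch using the gpt4all Python bindings (pip install gpt4all).
# The model filename is an illustrative assumption; substitute any model
# from the GPT4ALL catalog.

def build_prompt(question: str) -> str:
    """Wrap a user question in a simple instruction template."""
    return f"You are a helpful assistant.\nQuestion: {question}\nAnswer:"

def ask_locally(question: str,
                model_name: str = "gpt4all-13b-snoozy-q4_0.gguf") -> str:
    from gpt4all import GPT4All  # imported here; the helper above needs no deps
    model = GPT4All(model_name)  # loads (and, the first time, downloads) weights
    return model.generate(build_prompt(question), max_tokens=200)

if __name__ == "__main__":
    print(ask_locally("What does fine-tuning a language model mean?"))
```

Because inference happens entirely on local hardware, nothing in the prompt or the response ever touches a third-party server, which is exactly the property regulated industries need.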

This development has caused quite a stir in the tech community, with some arguing that companies like Google and OpenAI aren’t paying enough attention to open source, which democratizes access and offers numerous benefits, such as cost-efficiency and enhanced security.

So why did Facebook do it? Because they’ve been struggling to catch up, and by releasing LLaMa as open source, they leveraged the global community to further develop the model. This approach has already led to incredible advancements, with developers fine-tuning LLaMa for chat, speed, and compatibility with various devices. Essentially, the community conducted Facebook’s R&D for free.

So now let’s talk about how these recent developments will impact marketing and society at large. Let’s start with the marketing side of things. Until recently, deploying high-quality, large language models in marketing contexts, such as chatbots on websites, usually required using OpenAI’s APIs. However, with the release of LLaMa and numerous free, open-source models, that’s no longer the case. If you have a technical team, you can use an open-source model and save a significant amount of money.
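A quick back-of-envelope comparison shows where the savings come from. Every number below is an illustrative assumption, not current vendor pricing; plug in your own token volumes and hardware costs.

```python
# Back-of-envelope cost sketch. All prices are illustrative assumptions,
# not actual vendor rates.

def api_cost(tokens_per_month: int, price_per_1k_tokens: float) -> float:
    """Monthly cost of a hosted API billed per 1,000 tokens."""
    return tokens_per_month / 1000 * price_per_1k_tokens

def self_host_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Monthly cost of running an open-source model on rented hardware."""
    return gpu_hours * price_per_gpu_hour

# Assumed: 500M tokens/month at $0.002 per 1K tokens...
hosted = api_cost(500_000_000, 0.002)        # 1000.0
# ...versus one assumed $1/hour GPU running around the clock for 30 days.
self_hosted = self_host_cost(24 * 30, 1.0)   # 720.0
```

Note that the crossover depends on volume: at low usage a metered API is cheaper, but hosted costs scale with every token while self-hosted hardware is a flat rate, so heavy chatbot traffic is where the open-source option pays off.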

For software marketers, incorporating large language models into your software just became easier, more privacy-compliant, and nearly cost-free. Say goodbye to commercial licensing and privacy concerns. Open-source models can now be embedded in your software and run locally without privacy issues, eliminating OpenAI API fees. This is a major win for both software companies and consumers.

Marketers who have just started using ChatGPT will also benefit from these open-source models. You can now have a model that runs on your desktop or laptop with similar performance to ChatGPT but without the privacy concerns. Moreover, your organization can control the underlying model, ensuring a smoother user experience and allowing you to choose when to upgrade. This puts you in the driver’s seat.

Marketers working in regulated industries or secure workplaces that couldn’t use ChatGPT before can now approach their IT departments with open-source models that offer the same benefits without the risks. If you have access to technical resources, fine-tuning these open-source models becomes even more advantageous. The fine-tuned models can be customized for your company’s specific needs and tasks, making your work more efficient.

For marketers new to large language models, this development might complicate things, as each model is different, and their prompt structures vary. A sensible path for newcomers is to start with ChatGPT and then move to GPT4ALL with the Snoozy 13B model.
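To show what "prompt structures vary" means, here are two templates in the style different model families expect; the same question has to be wrapped differently for each. These are illustrative simplifications: always check a model's documentation for its exact format.

```python
# Two illustrative prompt templates. The exact wording and tokens each model
# expects vary; consult the model's own documentation.

def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style instruction template, popular among LLaMa fine-tunes."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(instruction: str) -> str:
    """ChatML-style template with role markers, used by some chat models."""
    return (
        "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

Feeding a model the wrong template usually still produces output, just noticeably worse output, which is why prompts that work well in ChatGPT may need reworking for a local open-source model.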

Now, let’s discuss the broader implications of these powerful AI models. There’s no doubt that these models will be misused, just as open-source website software like Apache is misused to host hate-filled content around the world. However, this would have happened whether these open source models were available or not. Hostile nation-states, especially, have the resources to build custom models from scratch.

On the positive side, unrestricted models enable the creation of content that may have artistic value, such as erotic literature or controversial writing about sensitive topics; in that sense, these models uphold the right to free expression. We should anticipate a massive increase in the use of these models as they become more accessible and free. It is important to remember that, just like every technology that has come before it, AI will amplify humanity: both the positive and the negative.

In conclusion, large language models are definitely here to stay, and their open-source nature now ensures that even if big AI companies disappear, the technology will remain available to everyone. However, regulating open-source software is challenging due to the decentralized nature of its distribution. Legislators and elected officials will now need to focus on the effects and outputs of these tools, rather than attempting to regulate the tools themselves. That particular ship has now sailed.

And generative AI is accelerating as a result. We can choose to ride the wave or be consumed by it, but one thing is now certain: there’s no stopping it.
