An Alternative to ChatGPT That Is Open Source, but Beware

From the same developer who reverse-engineered Meta’s closed-source AI system Make-A-Video, the first open-source equivalent of OpenAI’s ChatGPT has arrived. Philip Wang released it to the world of programmers and coders for free, inviting future contributions and collaboration. The system, PaLM + RLHF, is a text-generating model that behaves similarly to ChatGPT: it combines Google’s large language model PaLM with a technique called Reinforcement Learning with Human Feedback (RLHF) to create a system that can, in principle, accomplish pretty much any task ChatGPT can, from drafting emails to suggesting computer code for building programs.

PaLM + RLHF, an Alternative to ChatGPT

But PaLM + RLHF isn’t pre-trained, which is a downside for anyone hoping to just plug and play. The system hasn’t been trained on the example data from the internet that it needs to actually work. This means that unless the user is an experienced programmer in artificial intelligence and machine learning (ML), downloading PaLM + RLHF onto a PC won’t magically install the ChatGPT-like experience they are looking for. To get there, they would need to compile gigabytes of text from which the model can learn, and find hardware with the capacity to handle such a training workload.
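
As a rough sketch of what “training it yourself” involves, the snippet below runs a single language-modeling step with the project’s PyTorch code. The `PaLM` class and its keyword arguments follow the PaLM-rlhf-pytorch README as of this writing; treat the exact names and sizes as assumptions and check the repository, and remember that real pre-training iterates over gigabytes of tokenized text, not one random batch.

```python
# Minimal pre-training sketch, assuming the PaLM class and return_loss keyword
# shown in the PaLM-rlhf-pytorch README; hyperparameters are illustrative only.
import torch
from palm_rlhf_pytorch import PaLM

# A deliberately tiny model; the real PaLM has hundreds of billions of parameters.
palm = PaLM(num_tokens=20000, dim=512, depth=12).cuda()

# Stand-in batch; in practice this would come from gigabytes of tokenized web text.
tokens = torch.randint(0, 20000, (1, 2048)).cuda()

loss = palm(tokens, return_loss=True)  # causal language-modeling loss
loss.backward()                        # one optimization step out of millions
```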

These two AI systems, ChatGPT and PaLM + RLHF, share a special ingredient in Reinforcement Learning with Human Feedback, a technique that aims to better align language models with what users want them to accomplish. RLHF starts with a language model, PaLM in this case, which is fine-tuned on a dataset that pairs prompts (e.g., “Explain machine learning to a six-year-old”) with what human volunteers expect the model to say (e.g., “Machine learning is a form of AI…”). The prompts are then fed to the fine-tuned model, which generates several responses, and the volunteers rank the responses from best to worst. Finally, the rankings are used to train a “reward model” that takes the original model’s responses and sorts them in order of preference, filtering for the top answers to a given prompt.
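
The reward-model step can be made concrete with a toy PyTorch sketch. None of this is the actual PaLM + RLHF code; the model, sizes, and data below are made up for illustration, but the pairwise ranking loss is the standard way human rankings get turned into a training signal.

```python
# Toy reward-model sketch (not the PaLM + RLHF code): human rankings become
# (preferred, rejected) pairs, and the model learns to score preferred higher.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRewardModel(nn.Module):
    """Stand-in reward model: embeds a token sequence, maps it to a scalar score."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        pooled = self.embed(tokens).mean(dim=1)  # average the token embeddings
        return self.score(pooled).squeeze(-1)    # one reward per sequence

reward_model = TinyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Pretend these are tokenized (prompt + response) pairs where volunteers ranked
# the first response above the second for the same prompt.
preferred = torch.randint(0, 1000, (8, 32))
rejected = torch.randint(0, 1000, (8, 32))

# Pairwise preference loss: push reward(preferred) above reward(rejected).
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
loss.backward()
optimizer.step()
```

In the full RLHF pipeline, the trained reward model’s scores are then used in a reinforcement-learning step to further tune the language model toward the answers people preferred.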

It is quite an expensive process to pull off, from collecting the training data to the training itself. PaLM is about 540 billion parameters in size, “parameters” being the parts of the language model learned from the training data. A 2020 study put the cost of developing a text-generating model with only 1.5 billion parameters at up to $1.6 million. Training the open-source model Bloom, which has 176 billion parameters, took about three months running on 384 Nvidia A100 GPUs, and a single A100 costs thousands of dollars.
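
Some back-of-the-envelope arithmetic shows the scale. The bytes-per-parameter figure below is a common rule of thumb for 16-bit weights, not a number from the project itself:

```python
# Rough scale arithmetic; byte counts are rules of thumb, not project figures.
palm_params = 540e9    # PaLM: 540 billion parameters
bloom_params = 176e9   # Bloom: 176 billion parameters
bytes_fp16 = 2         # storing each weight as a 16-bit float

print(f"PaLM weights alone: ~{palm_params * bytes_fp16 / 1e12:.1f} TB")    # ~1.1 TB
print(f"Bloom weights alone: ~{bloom_params * bytes_fp16 / 1e9:.0f} GB")   # ~352 GB

# An 80 GB A100 holds roughly 40 billion fp16 parameters, before optimizer
# state and activations, which is why training at this scale needs many GPUs.
print(f"A100s just to hold PaLM's weights: ~{palm_params * bytes_fp16 / 80e9:.0f}")
```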

Conclusion

Running a trained model of PaLM + RLHF’s size isn’t easy either: Bloom, for comparison, requires a dedicated machine with around eight A100 GPUs, and cloud alternatives are quite pricey. Still, the idea is a nice one to pursue, and PaLM + RLHF is here to give us a head start.
