The AI winter
Day 134 / 366
The term "AI winter" describes a period when interest, funding, and progress in AI slow down. Recent signs in the AI domain have a lot of people thinking that we might be entering one.
The subreddit https://www.reddit.com/r/LocalLLaMA/ is filled with people experimenting with running open-source LLMs locally and building innovative tools with them. The community was excited about the release of Llama-3, the latest LLM from Meta. That excitement soon turned into disappointment, however, when they found that the model did not respond well to fine-tuning.
Open-source LLMs, especially the ones you can run locally, are nowhere near as powerful as GPT models right out of the box. But through fine-tuning, they can be trained to perform well in a particular niche. This is why getting fine-tuning to work for Llama-3 was so crucial.
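The idea behind fine-tuning is simply to continue training a pre-trained model on a small, niche dataset so its behavior shifts toward that niche. Here is a toy sketch of that concept using a one-parameter model and plain gradient descent; the data, learning rates, and step counts are made-up illustrations, and this is nothing like the scale or machinery of actual LLM fine-tuning.

```python
# Toy illustration of fine-tuning: a one-parameter model y = w * x.
# "Pre-training" fits w on broad data; "fine-tuning" continues gradient
# descent from that weight on a small niche dataset, nudging w toward
# the niche behavior. A conceptual sketch, not how LLMs are trained.

def train(w, data, lr=0.01, steps=200):
    """Plain gradient descent on mean squared error for y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Broad "pre-training" data follows y = 2x; niche data follows y = 3x.
pretrain_data = [(x, 2.0 * x) for x in range(1, 11)]
niche_data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = train(0.0, pretrain_data)          # pre-trained weight, lands near 2.0
w_ft = train(w, niche_data, steps=50)  # fine-tuned weight, pulled toward 3.0
```

The key point of the sketch is that fine-tuning starts from the pre-trained weight rather than from scratch, which is why a small niche dataset is enough to move the model, and also why a base model that "does not respond well to fine-tuning" is such a problem.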
There is also a research paper that came out recently questioning a claim made by most AI companies: that these models can keep improving simply by being fed more and more data. Through a series of tests, it found that models are consistently bad at handling concepts that are rare in their training data. For instance, ask an image model to draw a picture of a bird and it will do an amazing job. But ask it to draw a specific species of bird, and the output will not be as good, simply because the model has not seen many images of that specific species in its training data.
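One way to see why "just add more data" hits a wall is that concept frequencies in large corpora tend to be heavily long-tailed. The sketch below assumes a Zipf-like distribution (frequency proportional to 1/rank) over made-up concept and corpus sizes; the numbers are purely illustrative, not from the paper.

```python
# Toy illustration of the long-tail problem: if concept frequency in a
# training corpus follows a Zipf-like distribution (frequency ∝ 1/rank),
# common concepts dominate and rare ones get very few examples.
# All quantities here are hypothetical, chosen only for illustration.

total_examples = 1_000_000
num_concepts = 10_000

# Normalizing constant for a Zipf distribution over the concepts.
harmonic = sum(1 / rank for rank in range(1, num_concepts + 1))

def examples_for(rank):
    """Expected number of training examples for the concept at a given rank."""
    return total_examples * (1 / rank) / harmonic

common = examples_for(1)     # e.g. "bird" in general
rare = examples_for(5000)    # e.g. one specific rare species
```

Under this distribution, the rank-1 concept gets on the order of a hundred thousand examples while a mid-tail concept gets a couple dozen, and doubling the corpus only doubles both counts: the rare concept stays starved relative to the common one.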
The hype around AI is slowly dying down. Are we nearing an AI winter? Or are we in one already?