We do a lot of fine-tuning at letterly.app, but we don't spend a huge budget on it.
My take is the following:
- For fine-tuning, we can only use GPT-3.5.
- And it isn't changing much, since OpenAI's focus is on the 4+ models.
So there is no point in fine-tuning again just because a new 3.5 model is released. We only re-fine-tune when we have new data or new approaches; then we can fine-tune on the newer versions of the model.