A reminder to Microsoft
A reminder to Microsoft:
- The model's multilingual capabilities are poor
- The model keeps looping the same content
- I have done deep experimentation and fine-tuning based on the old phi3.5 mini and experienced its real computing and reasoning capabilities (capabilities the larger phi4 lacks)
- Microsoft should try to incorporate more knowledge from its own search engine into the training data (rather than being overly "safe" to the point of uselessness)
- Thank you for open-sourcing it! I also look forward to Microsoft bringing more impressive open-source models to the world!
What do you mean by 2. exactly? @win10 . How do you like this model compared to others, care to explain? Are you using it with Chinese perhaps? My first findings are quite positive, especially for the size.
Its Chinese ability is very weak.
And "too safe" is simply not suitable for daily use and literature.
They rely too heavily on synthetic data, whose distribution is too clean; this yields good eval scores but poor real-world performance. In Phi's view the world is full of rainbows, but in reality it's full of grammar errors and raw HTML.