Apr 26, 2024

What is the Recently Released Microsoft Phi-3 AI Model and How to Get Access to It

Microsoft has introduced a new AI model called Phi-3. It processes text quickly and runs well even on iPhones, generating around 12 tokens per second. In this short article, we shed light on the details of Phi-3 and show a simple way to try Microsoft's new model.

Microsoft Phi-3

New AI Model by Microsoft

Microsoft has launched a new AI model called Phi-3, which is noteworthy for its ability to run smoothly on iPhones, generating text at about 12 tokens per second. The model is drawing attention for its strong performance: the Phi-3 mini version competes closely with larger models like LLaMa 3 8B despite being smaller, and the medium-sized Phi-3 also performs well, comparable to Mistral 7B.

How Phi-3 Is Trained

Unlike earlier Phi models, which relied mostly on synthetic data, Phi-3 also includes a large amount of data from the internet. Microsoft trains it in two stages: first the model learns from heavily filtered web data, then from a mix of synthetic data and an even more selectively filtered subset of that web data. This two-step process helps the model learn more effectively and reason more deeply.
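To make the idea concrete, here is a minimal, purely illustrative sketch of such a two-phase data schedule in Python. This is not Microsoft's training code; the dataset iterables, mixing ratio, and model interface are hypothetical placeholders.

```python
import random

# Illustrative two-phase data schedule, NOT Microsoft's actual pipeline.
# `filtered_web`, `strictly_filtered_web`, and `synthetic` are assumed to be
# iterables of training batches; `model(batch)` is assumed to return a loss.

def phase_one(filtered_web):
    # Phase 1: heavily filtered web data builds general language ability.
    for batch in filtered_web:
        yield batch

def phase_two(synthetic, strictly_filtered_web, synthetic_ratio=0.5):
    # Phase 2: blend synthetic data with an even more selective web subset
    # to emphasize reasoning and niche skills.
    for syn_batch, web_batch in zip(synthetic, strictly_filtered_web):
        yield syn_batch if random.random() < synthetic_ratio else web_batch

def train(model, optimizer, filtered_web, synthetic, strictly_filtered_web):
    for batches in (phase_one(filtered_web),
                    phase_two(synthetic, strictly_filtered_web)):
        for batch in batches:
            loss = model(batch)      # forward pass returning a scalar loss
            loss.backward()          # backpropagate
            optimizer.step()
            optimizer.zero_grad()
```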

Challenges with Size and Learning

One interesting approach by the developers is the "data optimal regime," where they try to leave out data that a model of this size doesn't really need to memorize, such as the outcomes of specific sports games. However, there is a notable challenge with this method: beyond roughly 7 billion parameters, improvements become hard to come by. The 14-billion-parameter version does not show significant gains over the 7-billion one. This suggests two possibilities: either the model has already learned everything it can from the data provided, or it has reached the limit of what the benchmarks used in evaluation can capture.

Current Release and Initial Performance

Only the smallest version, Phi-3 mini, has been released so far. It recently went head-to-head with other models in the ChatBot Arena, a well-known platform where AI models are compared. While it didn't outperform the top model LLaMa 3 8B, it held its own and matched up well against Mistral 7B, proving its competitiveness.

ChatBot Arena Phi-3

Efficiency and Future Potential

In a performance test, Phi-3 processed inputs remarkably fast on an M3 Max chip in fp16, demonstrating its efficiency and speed. This suggests that, while the model is already fast, there may still be room to optimize its performance further.
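As a rough point of reference, here is how you might measure generation throughput yourself with the Hugging Face transformers library on Apple Silicon (PyTorch's mps backend) in fp16. This is a sketch under assumptions about your local setup, not the benchmark cited above; actual numbers depend on hardware, quantization, and runtime.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
device = "mps" if torch.backends.mps.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # fp16, as in the test mentioned above
    trust_remote_code=True,
).to(device)

prompt = "Write a short poem about small language models."
inputs = tokenizer(prompt, return_tensors="pt").to(device)

start = time.time()
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
elapsed = time.time() - start

new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/second on {device}")
```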

Phi-3 has sparked a lot of interest and discussion within the AI community about its capabilities, its unusual approach to training data, and its potential applications in various fields. Be patient: even stronger versions of Phi-3 are on the way!

How to Use Phi-3

You can try the Phi-3 Mini model on Hugging Face, and it's extremely easy. If you'd rather run the model locally, see the code sketch after these steps.

  1. Go to HuggingChat, Hugging Face's chatbot interface.

  2. Sign up.

  3. Pick microsoft/Phi-3-mini-4k-instruct from the model list and start chatting!

HuggingChat Phi-3
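If you'd rather run Phi-3 mini locally instead of using HuggingChat, a minimal sketch with the Hugging Face transformers library looks like this (assuming a recent transformers version and a GPU or Apple Silicon machine; adjust the device and dtype to your setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Phi-3 mini is an instruct model, so use the chat template.
messages = [{"role": "user", "content": "Explain Phi-3 in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=False)
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```

The same sketch should work with microsoft/Phi-3-mini-128k-instruct if you need the longer context window.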


Will Phi-3 Be Added to ChatLabs AI?

At ChatLabs, we're known for offering a wide variety of AI models. Our lineup includes more than 30 different models such as GPT-4, Gemini Pro 1.5, Groq, Llama 3, Claude 3 Opus, Mistral, and many others. We're always looking to expand our collection with the latest and most impressive AI technologies. Our team works fast, usually adding new large language models (LLMs) to our platform within just 1-2 days, which is quicker than most of our competitors.

The Microsoft Phi-3 model will soon be part of our platform. We invite you to keep an eye out for updates and be ready to try out the newest AI technologies that ChatLabs has to offer.


Useful Links

Technical Report on arXiv.org
Phi-3 Mini 4k model
Phi-3 Mini 128k model
