
Use self-hosted models with Ollama in AIGNE Studio

Yechao
Aug 10, 2024 · edited

AIGNE now lets you connect your own self-hosted Ollama service 🎉. This integration approach is highly extensible: it works not only with Ollama, but also with other LLM providers or open-source LLM services that you run yourself. Let's explore the features of AIGNE's LLM adapter together!

What is an LLM adapter?

The LLM adapter is a feature that extends AIGNE's capabilities. With it, developers can easily integrate any large language model (LLM) into AIGNE. As this article describes, you can run an LLM service yourself and use it in AIGNE in place of the built-in AI service.

What is Ollama?

Ollama is an open-source tool for running large language models on your own hardware. It bundles model weights, configuration, and a runtime behind a simple command-line workflow, and exposes the installed models through a local REST API. With Ollama you can run popular open models such as Llama locally, keeping your data on your own machine while still generating high-quality text for writing, conversation, or data analysis across a wide range of scenarios.

Access Ollama in AIGNE 🚀

Install and launch Ollama

Following the official Ollama documentation, install and start the Ollama application on your computer. (Ollama can run on a CPU, but GPU acceleration is recommended for good performance; see the official documentation for hardware details.) You can verify the service is up with the quick check below.
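
A minimal sanity check, assuming Ollama is installed and listening on its default port 11434:

# Check that the Ollama CLI is installed
ollama --version

# The local Ollama server responds with "Ollama is running"
curl http://127.0.0.1:11434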

Download Model

Run the following command in your system terminal to download the llama3.1 model, or download any other model from https://ollama.com/library:

ollama pull llama3.1
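
After the download completes, you can confirm the model is available locally:

# List locally installed models; llama3.1 should appear in the output
ollama list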

Connect Ollama to AIGNE

Now you are ready to unleash unlimited creativity in AIGNE! Just open your AIGNE project, create an Agent, choose Ollama as the LLM Provider, and configure the corresponding parameters:

  • Model: enter the name of the model you want to use (it must already be installed; see the download step above).
  • URL: enter http://127.0.0.1:11434/api/chat (replace 127.0.0.1 with the IP address of the machine running the Ollama service).

Note: make sure the server running AIGNE and the server running Ollama are on the same network and can reach each other. If Ollama is running only on your local machine, then only a locally running AIGNE instance can connect to it. You can verify the endpoint works with the sketch below before configuring AIGNE.
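
For example, a quick test of Ollama's /api/chat endpoint from the terminal (the prompt text here is just an illustration):

# Send a single non-streaming chat request to the local Ollama service
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [{ "role": "user", "content": "Say hello in one sentence." }],
  "stream": false
}'

If this returns a JSON response containing the model's reply, the same URL will work when entered in AIGNE.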


After completing these settings, you can freely use your own LLM service in AIGNE. Unleash your creativity and explore unlimited possibilities! 🎉

Conclusion

We are committed to continuously updating and expanding support for various LLM services in AIGNE, giving users more choices. If there is an LLM service you would like to integrate, please share your ideas and we will fully support you 💪. We also sincerely invite technically capable users to develop and submit their own LLM adapters and publish them in our Blocklet Store to provide more services to the community 👏! Your creativity and contributions will help us build a more diverse and powerful ecosystem together.

