Leveraging AI to enhance open-source collaboration with WAISE

31 May 2024 5 min read

Written by

The XWiki Team

In this article, we're thrilled to share highlights from our interview with the Next Generation Internet (NGI) Community about one of our latest and most impactful projects, WAISE (Wiki AI Search Engine). We are developing this innovative tool as part of the NGI Search initiative, and we can't wait to tell you all about it!

At XWiki, we have always been passionate about empowering collaboration and knowledge sharing through our open-source wiki platform. With WAISE, we are taking this mission to the next level by leveraging the power of AI and large language models (LLMs).

Introducing WAISE

With the WAISE project, we are building an application server that provides a chatbot powered by Large Language Models (LLMs). The key innovation is that this chatbot can be integrated into any application to answer questions based on the content and data in that application, while respecting the permissions of the current user.

In developing WAISE, we are taking full advantage of XWiki's powerful features. For example, we are using the App Within Minutes functionality to rapidly prototype and iterate on the WAISE application directly from the wiki interface. This allows us to quickly build out the core components like the user interfaces for managing language models or collections of documents.

We are also leveraging XWiki's support for structured data and integration with Solr 9 to efficiently index and query the content that powers WAISE's AI-assisted search. The advanced permission system ensures that WAISE respects user access controls when retrieving information.
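To illustrate the permission-aware retrieval idea, a Solr query can restrict results to documents the current user is allowed to view by adding a filter query. This is only a sketch: the field names (`content`, `allowed_view`) are hypothetical, not WAISE's actual schema.

```python
def build_solr_params(question_terms, user, groups):
    """Build Solr query parameters that limit search results to documents
    the current user may view (field names are hypothetical)."""
    # Main query: match the user's question terms against the content field.
    q = "content:(%s)" % " ".join(question_terms)
    # Filter query: keep only documents whose view ACL lists the user or
    # one of their groups; 'allowed_view' is a hypothetical multi-valued field.
    acl_values = [user] + list(groups)
    fq = "allowed_view:(%s)" % " OR ".join('"%s"' % v for v in acl_values)
    return {"q": q, "fq": fq, "rows": 10}

params = build_solr_params(["onboarding", "checklist"], "jdoe", ["HR", "AllEmployees"])
```

Because the access check happens inside the filter query, documents the user cannot see never reach the LLM's context in the first place.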

By building on top of XWiki, we get a robust and flexible foundation for WAISE that allows us to focus on the AI and search innovation.

Highlights of the WAISE architecture

  • Indexing of content from external applications via a REST API
  • Retrieval of relevant context to augment the user's question
  • Authentication support to embed the chat securely in other apps
  • Planned integration with XWiki itself to directly index wiki content
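To make the first bullet above concrete, here is a minimal sketch of the payload a client might push into such an index. The endpoint path and field names are invented for illustration and are not WAISE's actual REST API.

```python
import json

def build_index_document(doc_id, title, body, allowed_users):
    """Assemble the JSON body for a hypothetical 'index this document'
    REST call. Access rights travel with the content so the search side
    can enforce them at query time."""
    return {
        "id": doc_id,
        "title": title,
        "content": body,
        # Permissions stored alongside the document for retrieval-time filtering.
        "rights": {"view": allowed_users},
    }

doc = build_index_document(
    "wiki:Main.Onboarding", "Onboarding", "Welcome to the team...",
    ["jdoe", "HR"])
payload = json.dumps(doc)
# A client would then POST/PUT this payload to the indexing endpoint
# of the WAISE server (URL shown here is a placeholder):
# requests.put("https://waise.example.com/api/collections/docs/...", data=payload)
```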


WAISE doesn't run a large language model (LLM) itself. Instead, it works with any server that exposes an OpenAI-compatible API. You can use the open-source LocalAI project to run LLMs on your own servers. We're also evaluating alternatives such as vLLM, which may handle concurrent users more efficiently. You'll need to install the LLM server separately, or use a provider that offers an OpenAI-compatible API.
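Because the protocol is the widely used OpenAI chat-completions shape, pointing a client at LocalAI or any compatible server is just a matter of changing the base URL. A minimal sketch of building such a request, with retrieved context prepended as a system message (the model name and URL are placeholders):

```python
import json

def build_chat_request(base_url, model, question, context):
    """Build the URL and JSON body for an OpenAI-compatible
    /v1/chat/completions call, prepending retrieved context so the
    model answers from the application's own content."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = {
        "model": model,
        "messages": [
            # Ground the answer in the retrieved context.
            {"role": "system",
             "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
    }
    return url, json.dumps(body)

url, payload = build_chat_request(
    "http://localhost:8080", "mistral-7b", "What is WAISE?", "WAISE is ...")
# POST `payload` to `url` with Content-Type: application/json; the reply's
# choices[0].message.content field holds the model's answer.
```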

You can test the beta version on your wiki through the Extension Manager: go to Administer wiki > Extensions panel > Extensions and look for the LLM Application, starting with version 0.3.

Milestones achieved so far

  • ✅ Implemented support for indexing content from external applications using a REST API
  • ✅ Developed retrieval capabilities to find relevant context for a user's question
  • ✅ Added authentication mechanisms to securely embed the WAISE chat in other applications

Impact of the NGI Search funding

The NGI Search funding has been instrumental in allowing us to dedicate development resources to the WAISE project. Without this support, we would not have been able to progress as quickly or as comprehensively. It helped us focus on privacy-preserving, trustworthy search and discovery.

Future goals

Looking ahead, we have some exciting plans for WAISE:

  • Implement an integration with an external open-source application like OpenProject to showcase the power of the WAISE chatbot in enhancing existing tools
  • Deeply integrate WAISE with XWiki itself, indexing wiki content directly and allowing users to explicitly reference wiki pages as context for their questions
  • Continue to refine and optimize the LLM prompting and retrieval to provide the most relevant and accurate answers
