Cisco Research proudly presents BLAZE (Build Language Applications Easily), a flexible, standardized, no-code, open-source platform for assembling, modifying, and deploying NLP models, datasets, and components. How can it fit into your enterprise technology solutions? Here's what you need to know.
The landscape of Natural Language Processing (NLP) models in enterprise technology has seen a remarkable evolution over the years, morphing from single-task parsers to Large Language Models (LLMs) capable of achieving state-of-the-art scores across various benchmarks. Despite these accelerating advances in AI business solutions, implementing this research for actual usage isn't always straightforward.
Enterprises and researchers are inundated with new techniques, datasets, and models. Each research paper brings exciting work, yet standards for implementing its novel methodology vary widely. Furthermore, open-source LLMs and other models have yet to see broad deployment as production solutions.
Even though NLP holds significant potential for enterprise applications, its adoption in existing applications is sparse. Creating an NLP pipeline, or integrating NLP services into an existing solution, is complex: it requires extensive knowledge of, and experimentation with, models, datasets, and processing techniques. The rapid pace of the field, with components evolving every day, also demands meticulous fine-tuning.
Our team carefully identified several impediments on the implementation side of NLP. Over the last several months, we built a scalable, open-source answer to these challenges: the BLAZE library.
BLAZE is designed to streamline the integration of Natural Language Pipelines into current software solutions. Our role is to expedite the journey from idea to prototype, offering an open, extensible framework to benchmark existing solutions and compare them with novel ones before a production shift.
The building blocks of BLAZE are flexible stages of the NLP pipeline. Each stage's functionality is abstracted out into Lego-like blocks that can be combined in various ways.
Enterprises and researchers can add and arrange these building blocks to create recipes for a wide variety of NLP pipelines. With our benchmarking and model-comparison features, they can easily test new state-of-the-art (SOTA) research against existing models and processing techniques. Once a pipeline recipe is created, they can deploy and interact with their solution through various UIs at the press of a button.
BLAZE will help democratize NLP applications, providing a no-code solution to experiment with SOTA research and serving as a framework to implement NLP pipeline recipes into usable solutions.
The BLAZE framework operates in three stages: Build, Execute, and Interact.
In the Build stage, users specify the models, data, and processing components of their pipeline using a YAML format. They can create a fresh “recipe” via a block-based drag-and-drop UI, modify a pre-existing recipe, or use one directly out of the box. This YAML file contains the specifications of their custom pipeline.
For example, a user could use the drag-and-drop interface to create a solution that uses ElasticSearch and BERT for Semantic Search and BART for Summarization of uploaded documents.
Upon completing the drag-and-drop step, users can examine their generated YAML recipes. For example, here we can examine what the generated YAML recipe looks like for a virtual meeting assistant.
Here, the specified functions are summarization, agenda extraction, and actionables identification; the UI is the WebEx Meetings App; the input data is the live meeting transcript; and the models are BART (summarization) and GPT variants (agenda, actionables). We'll cover how to build new solutions in a future blog post about BLAZE!
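As a rough illustration of what such a recipe might contain, here is a minimal sketch assembled from the description above. The key names and structure are assumptions for illustration; the exact schema generated by BLAZE may differ:

```yaml
# Hypothetical recipe sketch -- actual BLAZE key names may differ
name: virtual-meeting-assistant
interface: WebEx-Plugin              # render inside the WebEx Meetings App
input:
  source: live-meeting-transcript    # stream the live transcript as input
functions:
  - task: summarization
    model: bart
  - task: agenda-extraction
    model: gpt-variant
  - task: actionables-identification
    model: gpt-variant
```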
In the Execute stage, BLAZE utilizes the YAML file generated or chosen in the preceding stage to establish a server, hosting the appropriate models, datasets, and components as specified. This server serves as the heart of the pipeline, allowing users to interact with their specified configuration of components to run their task.
The following diagram represents the architecture, illustrating how the server enables pipeline functionality.
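To make the architecture concrete, here is a minimal sketch of what such a server could look like, assuming the hypothetical recipe format from the Build-stage sketch above. This illustrates the idea rather than BLAZE's actual implementation; Flask, the `/run/<task>` route, and the stub model loader are all assumptions:

```python
# Illustrative sketch of the Execute stage -- not BLAZE's actual code.
import yaml
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the recipe produced in the Build stage (hypothetical file name).
with open("recipe.yaml") as f:
    recipe = yaml.safe_load(f)

def load_model(spec):
    # Placeholder: a real server would instantiate the model named in
    # spec["model"] (e.g., a Hugging Face pipeline for BART).
    return lambda text: f"[{spec['model']} output for {spec['task']}]"

# One callable per configured function, keyed by task name.
models = {spec["task"]: load_model(spec) for spec in recipe["functions"]}

@app.route("/run/<task>", methods=["POST"])
def run_task(task):
    if task not in models:
        return jsonify({"error": f"unknown task '{task}'"}), 404
    return jsonify({"result": models[task](request.json["text"])})

if __name__ == "__main__":
    app.run(port=3000)
```

The point is the separation of concerns: the recipe declares what to run, and the server wires those declarations to live endpoints, so swapping a model or task means editing YAML rather than code.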
In the Interact stage, users can interact with their pipeline through a number of pre-built interfaces or access each function directly through REST API services. Our current offering of interfaces includes a WebApp, a WebEx Meetings App plugin, and a conversational WebEx chatbot.
All of these interfaces are automatically generated and are specific to the user’s pipeline.
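For programmatic access, an interaction could look like the sketch below. The port, route, and payload shape simply mirror the toy server sketched earlier; they are not BLAZE's documented API, so consult the project documentation for the real routes:

```python
import requests

# Hypothetical endpoint mirroring the toy server above.
resp = requests.post(
    "http://localhost:3000/run/summarization",
    json={"text": "Full transcript of the meeting goes here..."},
)
print(resp.json()["result"])
```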
As an example, the following UI displays the WebApp interface for a user who wished to benchmark their new model (a combination of ElasticSearch and BERT) on the Stanford Question Answering Dataset (SQuAD).
Similar examples of the WebApp interface, covering search and summarization tasks, are shown as well.
Another example shows the WebEx Meeting App interface, serving as the UI for a user who wished to summarize, extract agenda items, and identify actionables from a live WebEx meeting transcript.
A complementary example shows the same pipeline, but now with a conversational AI WebEx chatbot interface instead. This was achieved simply by replacing “WebEx-Plugin” with “WebEx-ChatBot.”
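In recipe terms, the swap amounts to a one-line change along these lines (the `interface` key is again illustrative, though the two values are the ones named above):

```yaml
# Before: pipeline rendered inside the WebEx Meetings App
#   interface: WebEx-Plugin
# After: the same pipeline served as a conversational chatbot
interface: WebEx-ChatBot
```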
You can modify, deploy, and use your NLP pipeline in a matter of minutes. That's the power of BLAZE.
Designed for flexibility and extensibility, BLAZE offers a comprehensive enterprise technology solution for building NLP applications. Users can effortlessly add new models or components to the pipeline, integrating their existing models into the BLAZE framework and sharing 'recipes' (YAML files) with the community for others to use.
It democratizes the rapidly evolving NLP landscape, acting as a standardized, universal breadboard to combine and build various SOTA NLP tools. As research continues to flourish, users can easily upload new components as well, leveraging the modularized design of BLAZE to access the most cutting-edge tools.
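As a rough sketch of what such a modular component contract could look like, consider the interface below. This is hypothetical, not BLAZE's actual extension API; it only illustrates how a uniform contract lets new models plug into existing recipes:

```python
# Hypothetical component contract illustrating BLAZE-style modularity.
from abc import ABC, abstractmethod

class PipelineComponent(ABC):
    """A pluggable pipeline stage: text in, processed output out."""

    @abstractmethod
    def run(self, text: str) -> str:
        ...

class MySummarizer(PipelineComponent):
    """Example custom model a user might drop into a recipe."""

    def run(self, text: str) -> str:
        # Stand-in for a real model call (e.g., a fine-tuned BART).
        return text[:200] + "..."
```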
The above examples illustrate only a small fraction of what BLAZE can offer.
This platform can harmoniously blend various functionalities with different models. The framework empowers developers to create, share, and deploy custom pipelines in a standardized manner with no code. With our open-source commitment, BLAZE will help democratize NLP, making this frontier accessible to all.
In the coming weeks, we'll demonstrate the creation of several applications, walking through how to set up, create, and deploy your own NLP solutions via BLAZE. Here's a sneak peek at the next few weeks:
Stay tuned to the Outshift blog for updates!
To learn more about BLAZE, please take a look at our GitHub repository and documentation.
If you have any questions, want to share your thoughts, or wish to contribute, feel free to reach out to our Group Mailing List (blaze-github-owners@cisco.com) — we’d love to hear from you! And if you want to learn more, check out our blog post on Building with BLAZE.
A lot of work went into building BLAZE — our team started this project last summer, and we have been meticulously working to ensure a seamless experience for users wishing to harness the promise of NLP.
BLAZE wouldn't have been possible without the contributions of the following individuals, to whom we express our sincere thanks: