In our first blog post, Introducing BLAZE, Cisco Research proudly presented BLAZE – Build Language Applications Easily, a flexible, standardized, no-code, open-source platform to easily assemble, modify, and deploy various NLP models, datasets, and components. In our second blog post, Building with BLAZE, we walked through creating an NLP pipeline using Meeting Transcripts, BERT, BART, GPT, and the Webex Chatbot Interface. In this third blog post, we will harness BLAZE to build a live meeting assistant for Webex. We’ll show how BLAZE makes the creation of NLP solutions simple, efficient, and flexible — as easy as Build, Execute, Interact, or 1-2-3!
To install BLAZE, we can follow the instructions in the README on BLAZE's GitHub page, shown below:
(Cisco Open Source's GitHub - BLAZE Repo) https://github.com/cisco-open/Blaze#installation
There are many options to install BLAZE, including, but not limited to, Docker, PyEnv, Conda, and Shell.
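As a minimal sketch, getting the source locally looks like this (the exact setup commands for each option live in the README linked above):

```bash
# Clone the BLAZE repository, then follow the README's installation
# instructions for your preferred option (Docker, PyEnv, Conda, or Shell)
git clone https://github.com/cisco-open/Blaze.git
cd Blaze
```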
Once the installation is ready to go, we can get started with creating our Webex Embedded App!
Cisco Webex is a powerful platform to foster efficient digital collaboration. In addition to their Chat platform (detailed in Blog Post #2), their Meetings platform allows users to video-conference with access to several intelligent tools. One of these tools is Webex’s Meeting Assistant, which automatically records, transcribes, and stores live transcripts for each Webex meeting.
What if we can utilize this live meeting transcript? By creating a Webex Embedded App, we can integrate BLAZE's NLP pipeline-hosting functionality to provide LLM-powered insights to meeting attendees!
In the following sections, we'll walk through the process of creating our own Webex Embedded App, connecting our App with BLAZE, and utilizing our solution within a Webex meeting 🥳.
To create a Webex embedded app, we will need to host our solution's interface (a start page URL). Our solution will consist of an .html file and a .js file, which will call the endpoints provided by BLAZE's REST API and display meaningful insights directly within a Webex meeting!
We'll first develop our .html start page, then create the accompanying .js file, and finally host both files.
We will start by creating the HTML start page for our embedded app. This page loads when a user accesses our app from the Apps tray within Webex during meetings or in spaces.
We can copy the following code into a new index.html file:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-4bw+/aepP/YC94hEpVNVgiZdgIC5+VKNBQNGCHeKRQN+PtmoHDEXuppvnDJzQIu9" crossorigin="anonymous">
<title>WebEx NLP Plugin</title>
<script src="https://binaries.webex.com/static-content-pipeline/webex-embedded-app/v1/webex-embedded-app-sdk.js" defer></script>
<script crossorigin src="https://unpkg.com/webex@^1/umd/webex.min.js"></script>
<link href='https://fonts.googleapis.com/css?family=Quicksand' rel='stylesheet'>
</head>
<body style="background-color:black; padding: 1%; font-family: Quicksand">
<!-- Header -->
<div style="color:whitesmoke; justify-content:center;align-items: center;text-align: center; font-family:Quicksand">
<h1>BLAZE - WebEx Plugin</h1>
<p style="font-style:italic">Research edition of WebEx Plugin, powered by BLAZE. </p>
</div>
<!-- Summary -->
<div class="card text-white bg-dark mb-3 border-info" style="width: 100%; padding: 2%; color:whitesmoke">
<div class="card-body">
<h5 class="card-title" >Meeting Summary </h5>
<h6 class="card-subtitle mb-2 text-muted">A live summary of the meeting, powered by BART.</h6>
<div class="card-text" id="summaryContainer"> </div>
</div>
</div>
<!-- Topic Discussions -->
<div class="card text-white bg-dark mb-3 border-info" style="width: 100%; padding: 2%; color:whitesmoke">
<div class="card-body">
<h5 class="card-title" >Topic Discussions </h5>
<h6 class="card-subtitle mb-2 text-muted">A timestamped list of points, powered by GPT-3.</h6>
<div class="card-text" id="timeContainer"> </div>
</div>
</div>
<!-- Actionable Items -->
<div class="card text-white bg-dark mb-3 border-info" style="width: 100%; padding: 2%; color:whitesmoke">
<div class="card-body">
<h5 class="card-title" >Actionables (Todo's) </h5>
<h6 class="card-subtitle mb-2 text-muted">A list of actionable items, powered by GPT-3.</h6>
<div class="card-text" id="actionablesContainer"> </div>
</div>
</div>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.1/dist/js/bootstrap.bundle.min.js" integrity="sha384-HwwvtgBNo3bZJJLYd8oVXjrBZt8cqVSpeBNS5n7C8IVInixGAoxmnlMuBnhbgrkm" crossorigin="anonymous"></script>
<!-- index.js is your application code -->
<script src="index.js"></script>
</body>
</html>
```
Here, we can see various elements, including cards for each of the summary, agenda, and actionables that will be displayed. Furthermore, we can see the script tags for loading the Embedded Apps JavaScript library and our app's index.js file, which will interface directly with BLAZE. Let's move on to that!
We can create a new file called index.js (in the same location as index.html) and paste in this code:
```javascript
let webex;
let receiveTranscriptionOption = true;
// Accumulates finalized transcript segments as the meeting progresses
let transcript_final_result = {"transcript":""};
let meetings;
let current_meeting;
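// summary(): sends the transcript collected so far to BLAZE's /dynamic_query
// endpoint and renders the returned summary, agenda, and actionable items
// into the cards defined in index.html.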
function summary() {
console.log(transcript_final_result["transcript"]);
var data = JSON.stringify({
"module_name": "openai",
"method_type": "module_function",
"method_name": "process_transcript",
"args": [
transcript_final_result["transcript"]
]
});
var xhr = new XMLHttpRequest();
xhr.withCredentials = false;
xhr.addEventListener("readystatechange", function() {
if(this.readyState === 4) {
const response = JSON.parse(this.responseText);
console.log(response);
let summary = response["result"]["summary"]
let summaryContainer = document.getElementById('summaryContainer')
summaryContainer.innerHTML = `<div>${summary}</div>`
let actionables = response["result"]["actionables"]
let actionablesContainer = document.getElementById('actionablesContainer')
actionablesContainer.innerHTML = `<div>${actionables}</div>`
let time = response["result"]["agenda"]
let timeContainer = document.getElementById('timeContainer')
timeContainer.innerHTML = `<div>${time}</div>`
}
});
xhr.open("POST", "http://127.0.0.1:3000/dynamic_query");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(data);
}
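// --- Webex SDK setup ---
// Initialize the Webex SDK with a personal access token (replace
// <YOUR ACCESS TOKEN> below with a token from developer.webex.com).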
webex = window.webex = Webex.init({
config: {
logger: {
level: "debug",
},
meetings: {
reconnection: {
enabled: true,
},
enableRtx: true,
experimental: {
enableUnifiedMeetings: true,
},
},
// Any other sdk config we need
},
credentials: {
access_token:
"<YOUR ACCESS TOKEN>",
},
});
webex.once("ready", () => {
console.log("Authentication#initWebex() :: Webex Ready");
});
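// Register with the meetings service, sync and collect existing meetings,
// subscribe to live transcription events on the first meeting found, and
// then join that meeting with transcription enabled.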
webex.meetings.register().then(() => {
console.log("successful registered");
webex.meetings
.syncMeetings()
.then(
() =>
new Promise((resolve) => {
setTimeout(() => resolve(), 3000);
})
)
.then(() => {
console.log(
"MeetingsManagement#collectMeetings() :: successfully collected meetings"
);
meetings = webex.meetings.getAllMeetings();
if (webex.meetings.registered) {
console.log(meetings);
current_meeting = meetings[Object.keys(meetings)[0]];
console.log(current_meeting);
current_meeting.on(
"meeting:receiveTranscription:started",
(payload) => {
if (payload["type"] === "transcript_final_result"){
transcript_final_result["transcript"] = transcript_final_result["transcript"] + ", " + payload["transcription"];
}
console.log(transcript_final_result)
}
);
}
const joinOptions = {
moveToResource: false,
resourceId: webex.devicemanager._pairedDevice
? webex.devicemanager._pairedDevice.identity.id
: undefined,
receiveTranscription: receiveTranscriptionOption,
};
current_meeting.join(joinOptions);
});
});
// Regenerate the insights from the accumulated transcript every 100 seconds
const intervalID = setInterval(summary, 100000);
```
This code initializes a Webex app instance, waits for the SDK's ready event, registers with the meetings service, and joins the current meeting with live transcription enabled. We can also see our index.js file periodically sending requests to BLAZE's REST API server to retrieve the live meeting summary, agenda-tracking, and actionable extraction.
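For reference, here's a sketch of what a successful response from the /dynamic_query endpoint would look like, inferred purely from how index.js parses it (the actual field contents are generated by BLAZE's openai module):

```javascript
// Hypothetical response shape, inferred from the parsing logic in index.js;
// the real text comes from BLAZE's openai module
const exampleResponse = {
  "result": {
    "summary": "The team reviewed the Q3 roadmap and agreed on next steps.",
    "agenda": "0:02 Introductions, 0:10 Roadmap review, 0:25 Open questions",
    "actionables": "1. Share the draft roadmap. 2. Schedule a follow-up."
  }
};
```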
Pretty neat! Now, let's host our interface, allowing Webex to utilize it as an Embedded Meetings App!
To publish both of these files, we can use a variety of free web-host services.
One such provider is GitHub Pages. We can create a new repository, upload index.html and index.js, and then, voila, we'll be good to go! GitHub provides an excellent resource for setting this up:
(GitHub Pages: Websites for you and your projects) https://pages.github.com/
Make sure to select "Project Site" below the Homepage banner!
Once your code has been hosted, make sure to save the link to the index.html file, namely something along the lines of "https://<your-github-username>.github.io/<repo-name>/index.html" or "https://www.example.com/index.html".
Accessing this link should lead you to a page as shown:
So far, so good! Let's utilize this hosted interface to create our Webex Embedded App!
To create a new Webex App, we can follow a few simple steps:
1. On the Webex Developer Portal, create a new Embedded App for Meetings, giving it a name and description.
2. For the valid domains, enter the link to our hosted index.html and the overall hosted domain itself (essentially the same URL without the /index.html at the end!). Note that the https:// or http:// prefix should be omitted!
3. For the start page URL, enter the full link to our hosted index.html.
And voila, we have our App deployed and ready to go! Let's open Webex and ensure it is there.
To do this, we can start a Personal Room Meeting in Webex and select "Apps" on the bottom right.
From here, we'll see our apps in the "My Apps" section, under which should be our newly-created app!
Perfect! Now, we have our Webex Embedded App ready to go. We need to specify our meeting-assistant pipeline via BLAZE and launch our REST API backend. Then, we can give it a spin!
Our journey continues with BLAZE, which operates in three steps: Build, Execute, and Interact.
In this case, much of our pipeline will be similar to the one utilized while building our Webex Bot (Blog Post #2). We can utilize the same .yaml file and REST API server as last time!
Some of the functionality (namely the agenda-extraction) is present in this solution but was not present in our bot! Curious as to how this works? This is because of BLAZE's use of Dynamic APIs when module: ['openai'] is specified in the .yaml file. Our next blog post will take a deeper dive into the architecture and implementation of BLAZE, explaining how this works!
We can refer back to Blog Post #2's "Building the Bot" section and follow those steps, namely:
```bash
bash run.sh server yaml=yaml/05_search_summary_webex.yaml
```
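Once the server is running, we can sanity-check the endpoint our app calls. Here's an illustrative request (the payload mirrors the one index.js sends; adjust the host and port if your BLAZE server runs elsewhere):

```bash
# Hypothetical smoke test of the dynamic_query endpoint used by index.js;
# the transcript text in "args" is made up for illustration
curl -X POST http://127.0.0.1:3000/dynamic_query \
  -H "Content-Type: application/json" \
  -d '{"module_name": "openai", "method_type": "module_function", "method_name": "process_transcript", "args": ["Alice: Hi team, today we are reviewing the roadmap."]}'
```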
And voila, our frontend has been deployed, and our backend is now up and running!
To utilize our solution, we can follow these three steps:
1. Start or join a Webex meeting with the Webex Assistant enabled, so live transcription is available.
2. Open the Apps tray and launch our app from the "My Apps" section.
3. Watch the summary, topic discussions, and actionable items update as the meeting progresses!
Here's an example of the app in action! If we accidentally zone out or have to step out for a minute, our solution helps us quickly understand and track meeting highlights!
In this blog post, we utilized BLAZE to build a Webex app that analyzes a live meeting and displays insights, including a summary, actionable items, and a dynamic agenda.
With the flexibility of BLAZE, we can easily add more functionalities and modify existing ones. In our ensuing BLAZE post, we will examine several ways to add user-defined prompts and new components to our Webex bot and Webex app. Feel free to play around with BLAZE's models and prompts, and share the excellent functionalities you come up with for your personal meeting assistants!
Thank you so much for following along; we hope you enjoyed it! 😄
Here is a roadmap of our past, present, and future blog posts with BLAZE.
Next time, we'll switch gears and look at how BLAZE was built, understanding the versatile platform through its ABCs (Architecture, Blocks, and Configurability). Stay tuned till then! 🥳
To learn more about BLAZE, please review our GitHub repository and documentation.
If you have any questions, want to share your thoughts, or wish to contribute, please get in touch with our Group Mailing List (blaze-github-owners@cisco.com) — we'd love to hear from you!
A lot of work went into building BLAZE — our team started this project last summer, and we have been meticulously working to ensure a seamless experience for users wishing to harness the promise of NLP.
In addition to those acknowledged in our first blog post, we wanted to express our special thanks to: