Zoom AI Companion
Zoom plans to roll out the tool across its platform, meaning you will see it in Team Chat, Phone, Email, Meetings, and Whiteboard. Not all features will launch immediately.
Zoom is best known for video meetings, and one of AI Companion's capabilities lets you catch up if you join a meeting late: you can ask questions about what you missed through a side panel.
After your meeting, you can get the recording with highlights. The tool also provides smart chapters and can generate meeting summaries automatically, though the host needs to enable these features first.
As mentioned, not all new features will be available immediately. Real-time feedback, which can coach you on your conversational and presentation skills, is expected in spring.
In the coming weeks, generative AI summarization will arrive in Team Chat, helping you catch up on long chat threads.
Next year, the tool will be able to auto-complete sentences and schedule meetings directly from a chat.
Large Language Model
The company's approach to AI combines its own large language model with models from Meta (Llama 2), OpenAI, and Anthropic. This lets AI Companion incorporate new capabilities quickly from a variety of models.
Last month, Zoom faced public backlash over how it planned to train its AI models. The company's terms of service stated that it could use customer content, meaning customers' audio, video, chat, attachments, and similar data, to train its AI models.
The company later retracted this language, updating its terms to state that it will not use customer content to train its AI models.
However, the company has not said what data sources it will use to train its AI.
Training AI models often involves large datasets that may contain sensitive or personally identifiable information. If companies are careless in handling and protecting this data, their platforms risk privacy breaches or misuse.
Many AI models are black boxes, meaning it can be difficult to understand how they arrive at their decisions or predictions. This lack of transparency is problematic, especially in critical applications like healthcare or criminal justice, where accountability is crucial.
AI training data is sometimes obtained unethically or without individuals' knowledge or consent. For instance, scraping Zoom chat data or using audio from Zoom calls without permission raises ethical concerns.
Controversies around AI and data can erode public trust in technology companies. When people perceive AI systems as biased, unethical, or unaccountable, they lose trust in both the technologies and the companies that develop them.
Because Zoom has not stated what sources it will draw on to train the system, you might wonder what data is actually being used.