Microsoft recently demonstrated how Microsoft 365 Copilot works by combining large language models with your organizational data. Copilot transforms how we work by providing intelligent suggestions, summaries, and content generation within our everyday workflows. But how exactly does it do this while respecting privacy and security? Let’s break it down.
Where LLMs Get Their Knowledge
- Large language models (LLMs) are trained on massive public datasets: books, articles, and websites
- From that training, they learn language, context, and meaning
- You interact with an LLM using a prompt – a statement or question
- The LLM generates a response based on its training and the context from your prompt
- As you chat, the conversation provides more context so the LLM can stay relevant
- The chat history is wiped after each conversation, so context does not carry over to the next one (a minimal code sketch of this stateless behavior follows below)
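For developers, this behavior is easy to see against any chat-completions-style endpoint. Here is a minimal Python sketch using the Azure OpenAI SDK; the endpoint, key, API version, and deployment name are placeholders, and the point is simply that the model only sees the messages you send with each request.

```python
# Minimal sketch of a stateless chat exchange against an Azure OpenAI deployment.
# Endpoint, key, API version, and deployment name are placeholders, not real values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key="YOUR-KEY",
    api_version="2024-02-01",
)

# The model only "remembers" what you send: the conversation history is a list
# of messages that the caller maintains and resends with every request.
history = [{"role": "user", "content": "Summarize yesterday's project updates."}]
response = client.chat.completions.create(model="gpt-4o", messages=history)
print(response.choices[0].message.content)

# Start a new message list and the previous exchange is gone; nothing persists
# server-side beyond the single request and response.
```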
Providing Context to an LLM
To illustrate how providing context in a prompt works:
- Asked Microsoft Bing Chat (powered by GPT) what color shirt I’m wearing without any context
- It responded that it couldn’t see me, so it couldn’t know
- Asked again and described my outfit in the prompt
- It then responded using the context I provided
- In a new chat, asked again what color shirt I’m wearing
- It responded the same as the first time, showing the context doesn’t persist across conversations (the sketch below reproduces this experiment)
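The shirt-color experiment maps directly onto how these APIs work: context is simply whatever you include in the message list. Below is a small sketch using the same placeholder configuration as before; the replies shown in comments are illustrative, not guaranteed outputs.

```python
from openai import AzureOpenAI

# Same placeholder configuration as the earlier sketch.
client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                     api_key="YOUR-KEY", api_version="2024-02-01")

def ask(messages):
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# 1. No context: the model has nothing to go on.
chat = [{"role": "user", "content": "What color shirt am I wearing?"}]
first_reply = ask(chat)   # e.g. "I can't see you, so I don't know."

# 2. Same conversation, with the context described in the prompt.
chat += [{"role": "assistant", "content": first_reply},
         {"role": "user", "content": "I'm wearing a blue shirt and grey jeans. "
                                     "What color shirt am I wearing?"}]
print(ask(chat))          # answers from the supplied context

# 3. A brand-new conversation starts with an empty history, so the context is gone.
print(ask([{"role": "user", "content": "What color shirt am I wearing?"}]))
```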
How Microsoft 365 Copilot Works
Copilot has several core components:
- Large language models hosted in the Microsoft Cloud via Azure OpenAI
- Powerful orchestration engine
- Integrated into Microsoft 365 apps
- Leverages Microsoft Search for information retrieval
- Uses Microsoft Graph for organizational data and relationships
- Respects per-user access permissions to content and Graph data (see the sketch after this list)
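Copilot’s internal orchestration isn’t public, but the retrieve-then-ground pattern described above can be sketched with public building blocks: the Microsoft Graph search API for permission-trimmed retrieval and an Azure OpenAI deployment for generation. The helper names, the deployment name, and how the access token is obtained are all illustrative assumptions, not Copilot’s actual implementation.

```python
import requests
from openai import AzureOpenAI

GRAPH_SEARCH_URL = "https://graph.microsoft.com/v1.0/search/query"

def search_user_content(access_token: str, query: str) -> list[str]:
    """Query Microsoft Search over the signed-in user's files via Microsoft Graph.
    Graph trims results to what this user can already access, so the permission
    check happens server-side, not in this code."""
    body = {"requests": [{"entityTypes": ["driveItem"],  # files; mail would be a separate request
                          "query": {"queryString": query}}]}
    resp = requests.post(GRAPH_SEARCH_URL,
                         headers={"Authorization": f"Bearer {access_token}"},
                         json=body)
    resp.raise_for_status()
    hits = resp.json()["value"][0]["hitsContainers"][0].get("hits", [])
    return [h.get("summary", "") for h in hits]

def grounded_answer(client: AzureOpenAI, access_token: str, question: str) -> str:
    """Retrieve, ground, generate: the orchestration pattern in miniature."""
    snippets = search_user_content(access_token, question)
    prompt = ("Answer the question using only the context below.\n\n"
              "Context:\n" + "\n".join(snippets) +
              "\n\nQuestion: " + question)
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}])
    return resp.choices[0].message.content
```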
For example, in Teams:
- User asked: “Did anything happen yesterday with Fabrikam?”
- Copilot’s orchestrator searched the user’s accessible data for relevant context:
- Email from Mona
- Project files user had access to
- Sharing notifications for contract review
- Copilot combined this context into a prompt for the LLM
- LLM generated a response summarizing the Fabrikam activities
- Copilot cited each data source for transparency (an illustrative prompt assembly follows this list)
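To make the citation step concrete, here is one way an orchestrator could fold retrieved items into a single grounded prompt and ask the model to cite them. The source snippets are invented stand-ins for the Fabrikam items above, and Copilot’s real prompt format is not public.

```python
# Invented stand-ins for the retrieved Fabrikam items; real content would come
# from the permission-trimmed retrieval step shown earlier.
sources = [
    ("1", "Email from Mona",              "Mona shared updated Fabrikam pricing yesterday."),
    ("2", "Fabrikam project files",       "Statement of work v3 was uploaded to the project site."),
    ("3", "Contract review notification", "Legal requested a review of the Fabrikam contract."),
]

context = "\n".join(f"[{sid}] {title}: {text}" for sid, title, text in sources)
grounded_prompt = (
    "Using only the numbered sources below, summarize what happened yesterday with "
    "Fabrikam, and cite each fact with its source number in square brackets.\n\n"
    + context
    + "\n\nQuestion: Did anything happen yesterday with Fabrikam?"
)
# Send grounded_prompt as the user message, exactly as in the earlier sketches;
# the numbered citations in the reply can then be mapped back to the original items.
print(grounded_prompt)
```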
Generating New Content
Copilot can also use your data to help generate new content, like proposals:
- The LLM already knows proposal document structure and language from its general training
- Copilot orchestrator retrieves relevant content from documents you select
- This context is added to the LLM prompt
- LLM generates a draft proposal leveraging your existing data
- Generated content is a prompt response – not retained or used to train the LLM
- All data retrieval respects the user’s permissions (see the sketch below)
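The drafting flow can be sketched the same way: the orchestrator pulls text from the documents the user selected, under that user’s permissions, and hands it to the model as context. The document names, contents, and deployment name below are made up for illustration.

```python
# A minimal sketch of drafting from user-selected files, assuming their text has
# already been retrieved under the user's own permissions (e.g. via the Graph
# drive APIs). File names, contents, and the deployment name are illustrative.
from openai import AzureOpenAI

client = AzureOpenAI(azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
                     api_key="YOUR-KEY", api_version="2024-02-01")

selected_docs = {
    "Fabrikam SOW v3.docx":  "Scope: deployment of ...",
    "Pricing summary.xlsx":  "Licensing tiers: ...",
}

context = "\n\n".join(f"### {name}\n{text}" for name, text in selected_docs.items())
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Draft a customer proposal based only on these documents:\n\n"
                          + context}],
).choices[0].message.content
# The draft is just this response; nothing here is retained by or used to train the model.
print(draft)
```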
Maintaining Privacy and Security
- Prompts sent to the LLM with organizational data provide context but are not retained
- LLM responses are not used to train the foundation models
- All data retrieval follows user access permissions
- Learn more about Microsoft’s responsible AI principles
Hope this breakdown demystifies how Copilot taps into large language models and your data while maintaining security and privacy. Stay tuned for more Copilot updates!
Boost Productivity with Copilot and DBGM
As an IT services firm focused on digital transformation, DBGM Consulting can help you prepare for Microsoft 365 Copilot. Our AI experts can:
- Audit data sources Copilot will leverage
- Refine content permissions and governance
- Validate technical readiness
- Develop change management plans
- Provide user training
Contact us to maximize Copilot’s productivity benefits while safeguarding privacy and security. DBGM Consulting has the expertise to implement Microsoft’s latest innovations.