Skillfully deploy Microsoft SharePoint Premium to automate your organization's document processing and management.

In Microsoft SharePoint Premium in the Real World: Bringing Practical Cloud AI to Content Management, a team of veteran Microsoft AI consultants delivers an insightful and easy-to-follow exploration of how to apply SharePoint's content AI and advanced machine learning capabilities to your firm's document processing automation project. Using a simple, low-code/no-code approach, the authors explain how you can find, organize, and classify the documents in your SharePoint libraries. You'll learn to use Microsoft SharePoint Premium to automate forms processing, document understanding, image processing, content assembly, and metadata search.

Readers will also find:

* Strategies for using both custom and prebuilt, "off-the-rack" models to build your solutions
* The information you need to understand the Azure Cognitive Services ecosystem more fully and how you can use it to build custom tools for your organization
* Examples of solutions that will allow you to avoid the manual processing of thousands of your own documents and files

An essential and hands-on resource for information managers, Microsoft SharePoint Premium in the Real World is a powerful tool for developers and non-developers alike.
Page count: 511
Year of publication: 2024
Cover
Table of Contents
Title Page
Foreword—SharePoint Premium
First Wave—Cloud Beachhead (2012–2017)
Second Wave—Cloud Innovation (2017–2022)
Third Wave—Cloud AI (2022–present)
Information Worker Value
Developer Value
IT Pros and Admins
Conclusion
Introduction
Who This Book Is For
What This Book Covers
How This Book Is Structured
What You Need to Use This Book
CHAPTER 1: Artificial Intelligence
What AI Is—And What It Is Not
Forms of AI
Intelligence in Search of Wisdom
Responsible AI
In Conclusion
CHAPTER 2: Information Management
What's in This Chapter
Introducing Information Management
Information Management in Microsoft 365
CHAPTER 3: Azure Cognitive Services Landscape
What's in This Chapter
What Are Azure Cognitive Services?
Why Is an Understanding of Cognitive Services Related to an Understanding of Microsoft SharePoint Premium?
What Are the Different Services under the Umbrella of Azure Cognitive Services?
CHAPTER 4: Structured Content Processing
What's in This Chapter
What Is Structured Content Processing?
Creating Structured Content Processing Models in a Microsoft SharePoint Premium Content Center
Conclusion
CHAPTER 5: Unstructured Content Processing
What's in This Chapter
What Is Unstructured Content Processing?
Creating Unstructured Content Processing Models in a Content Center
Conclusion
CHAPTER 6: Image Processing
What's in This Chapter
What Is an Image?
Entering the Matrix
Image Processing and Information Management
Configuring OCR in Microsoft Purview
Configuring OCR and Image Tagging in SharePoint Sites
CHAPTER 7: Content Assembly
What's in This Chapter
Getting Started
Document Templates
Creating a Modern Template
Example: Creating a Nondisclosure Agreement Template
Example: Creating a Statement of Work Template
Example: Creating an Invoice Template
CHAPTER 8: Microsoft Search with SharePoint Premium
What's in This Chapter
Getting Started
What Is Microsoft Search?
Optimizing for Search
Customizing the Search Experience
SharePoint Premium Content Query
CHAPTER 9: SharePoint Premium Administration Features
What's in This Chapter
Information Management at Scale
Archive and Backup
CHAPTER 10: Microsoft 365 Assessment Tool
Why Use the Microsoft Syntex Assessment Tool?
Setting Up the Microsoft Syntex Assessment Tool
Running the Microsoft Syntex Assessment Tool and Generating Reports
Reports Generated by the Microsoft Syntex Assessment Tool
CSVs Generated by the Microsoft Syntex Assessment Tool
Conclusion
CHAPTER 11: Extending Microsoft SharePoint Premium with Power Automate
What's in This Chapter
Using Microsoft SharePoint Premium Triggers with Power Automate
Using Microsoft Syntex Actions with Power Automate
Getting Started
Example: Send an Email after Microsoft SharePoint Premium Processes a File
Example: Use Power Automate to Generate a Document Using Microsoft SharePoint Premium
APPENDIX A: Preparing to Learn Microsoft SharePoint Premium
Creating a Sandbox Tenant
APPENDIX B: The Next Big Thing(s)
What's in This Appendix
Autofill Columns
Content Governance
Copilot
Document Experiences
Document Library Templates
eSignature
Translation
Workflow Migration
Conclusion
Index
Copyright
Dedication
About the Authors
About the Technical Editor
Acknowledgments
End User License Agreement
Chapter 2
Table 2.1: Intrinsic metadata common to all SharePoint content
Table 2.2: Default columns visible in a calendar/event list
Chapter 3
Table 3.1: English languages supported by the Azure speech to text service...
Table 3.2: Supported languages between all Azure Speech Services
Table 3.3: Content Moderator scores for images provided by Microsoft
Chapter 4
Table 4.1: Confidence Score Meanings
Chapter 5
Table 5.1: Token examples and explanations
Chapter 9
Table 9.1: SharePoint Site Types
Chapter 2
Figure 2.1: Content life cycle
Figure 2.2: Compliance Center dashboard
Figure 2.3: The E3 option vs. the E5 option
Figure 2.4: Information Protection labels
Figure 2.5: Typical label policy summary
Figure 2.6: High-level settings for Data lifecycle management and Records man...
Figure 2.7: Records Management settings
Figure 2.8: SharePoint site collections
Figure 2.9: Lists, libraries, and subsites in a SharePoint site
Figure 2.10: Some view types
Figure 2.11: SharePoint default content types
Figure 2.12: Adding columns through a library view
Figure 2.13: Adding columns to a content type
Figure 2.14: Term store hierarchy
Figure 2.15: Adding a managed metadata column to a library
Figure 2.16: Managed Metadata browse tag
Figure 2.17: Managed metadata browse interface
Chapter 4
Figure 4.1: Example of a typical invoice. Source: Adapted from Confidence scor...
Figure 4.2: Page 1 of an example enterprise agreement
Figure 4.3: Example of a SharePoint content center for Microsoft SharePoint ...
Figure 4.4: The Options For Model Creation screen with additional model type...
Figure 4.5: The Options for creating a new item in the Microsoft Syntex Cont...
Figure 4.6: The invoice processing prebuilt model details screen
Figure 4.7: The invoice processing prebuilt model options screen
Figure 4.8: The Advanced Settings options
Figure 4.9: The invoice processing prebuilt model details screen with the co...
Figure 4.10: The Retention Label options
Figure 4.11: The new Contoso Invoice model page
Figure 4.12: The New Contoso Invoice model page (continued)
Figure 4.13: The model settings of the new Contoso Invoice model
Figure 4.14: Adding a file to the new Contoso Invoice model
Figure 4.15: Adding a file to the new Contoso Invoice model modal pop-up
Figure 4.16: Selecting a training file in the New Contoso Invoice model moda...
Figure 4.17: Adding a file to the new Contoso Invoice model
Figure 4.18: Adding extractors to the new Contoso Invoice model
Figure 4.19: Viewing identified extractors in the new Contoso Invoice model...
Figure 4.20: Selecting extractors in the new Contoso Invoice model
Figure 4.21: Validating extractors in the new Contoso Invoice model
Figure 4.22: Adding where the model is applied to the new Contoso Invoice mo...
Figure 4.23: Applying the new Contoso Invoice model to a site
Figure 4.24: Applying the Contoso Invoice model to a library
Figure 4.25: A typical SharePoint library
Figure 4.26: Applying the Contoso Invoice model to a library
Figure 4.27: A typical SharePoint library with model information shown
Figure 4.28: Applying the Contoso Invoice model to a library—advanced settin...
Figure 4.29: The Contoso Invoice model has been applied to a library.
Figure 4.30: Finishing the Contoso Invoice model
Figure 4.31: The Contoso Invoice model is complete.
Figure 4.32: Contoso Invoice model details
Figure 4.33: A SharePoint library with the Contoso Invoice model applied
Figure 4.34: A SharePoint library with the Contoso Invoice model showing con...
Figure 4.35: Options for creating a new Syntex model
Figure 4.36: The Layout Method: Details page
Figure 4.37: The layout method properties page
Figure 4.38: The advanced settings on the layout method properties page—Adva...
Figure 4.39: Selecting a preexisting content type on the layout method prope...
Figure 4.40: The retention label options on the layout method properties pag...
Figure 4.41: The Choose Information To Extract page
Figure 4.42: Adding a new extractor
Figure 4.43: Adding a new text field extractor
Figure 4.44: Adding a new number field extractor
Figure 4.45: Adding the properties for the number field
Figure 4.46: Adding a new date field extractor for the invoice date
Figure 4.47: Setting the properties for the new date field extractor
Figure 4.48: Adding a table field extractor
Figure 4.49: Adding the properties for the table field extractor
Figure 4.50: Example of invoice table for extraction
Figure 4.51: Editing a table field extractor column
Figure 4.52: Editing the column properties for the table field extractor col...
Figure 4.53: Adding a new table field extractor column
Figure 4.54: Adding table extractor properties for the three remaining colum...
Figure 4.55: The Add Collections Of Documents screen
Figure 4.56: The Add Collections Of Documents screen
Figure 4.57: A new collection of do...
Figure 4.58: The layout method: adding documents to a new collection of docu...
Figure 4.59: Selecting the source from which to add documents to the new col...
Figure 4.60: Uploading selected documents
Figure 4.61: The selected documents have been successfully uploaded.
Figure 4.62: Documents added to the new collection of documents
Figure 4.63: The layout method: tag all documents
Figure 4.64: Example of the customer name from an invoice
Figure 4.65: Tagging the Customer Name field
Figure 4.66: Tagging the Subtotal field
Figure 4.67: Tagging the Invoice Date field
Figure 4.68: Tagging the Table field
Figure 4.69: Setting rows and columns in the table
Figure 4.70: The rows have been set
Figure 4.71: The columns have been set.
Figure 4.72: Setting the column headings
Figure 4.73: The column headings have been set.
Figure 4.74: The columns are skewed.
Figure 4.75: Subsequent documents
Figure 4.76: Model summary page
Figure 4.77: Your model is training.
Figure 4.78: The Model Details page: your model is training
Figure 4.79: Training is complete.
Figure 4.80: The model evaluation detailed report
Figure 4.81: The Quick Test panel on the model evaluation page
Figure 4.82: New document added to the Quick Test panel
Figure 4.83: The predicted value and confidence score
Figure 4.84: Selecting a site to apply the model
Figure 4.85: Selecting a library
Figure 4.86: Applying the model: final settings
Figure 4.87: The model is applied.
Figure 4.88: The model overview page: lets you easily access model details a...
Figure 4.89: Where the model is applied
Figure 4.90: The SharePoint library where the new model is deployed
Figure 4.91: The SharePoint library where the new model is deployed, with do...
Figure 4.92: The table field in the SharePoint library where the new model i...
Figure 4.93: The SharePoint list where the new model table data is deployed...
Chapter 5
Figure 5.1: Example of a typical invoice
Figure 5.2: The Options For Model Creation screen
Figure 5.3: The Microsoft Syntex content center landing page with sidebar sho...
Figure 5.4: The Import Sample Contracts Library screen
Figure 5.5: The new sample contracts library
Figure 5.6: The Review Models And Apply New Ones modal pop-up
Figure 5.7: Preview of the Microsoft Syntex model
Figure 5.8: The model page of the Microsoft Syntex content center
Figure 5.9: The import sample contracts library process is complete.
Figure 5.10: The model settings of a Microsoft Syntex model
Figure 5.11: The model settings of a content processing model (Compliance are...
Figure 5.12: The Entity Extractors section of a model
Figure 5.13: The Client entity extractor label screen
Figure 5.14: Labeled data in the Label tab of the unstructured content model...
Figure 5.15: A document marked as No Label in the unstructured content model...
Figure 5.16: Example of the View Original modal pop-up window
Figure 5.17: The Find functionality showcasing predictive text
Figure 5.18: The Entity Extractors section of a model showing a requirements ...
Figure 5.19: The Train tab of the extractor pages for an unstructured content...
Figure 5.20: The Before Label rule for the Client extractor
Figure 5.21: The before-label rule for the Client extractor with multiple phr...
Figure 5.22: The advanced settings of the Client extractor before-label rule...
Figure 5.23: The before-label rule for the Client extractor using the Beginni...
Figure 5.24: The before-label rule for the Client extractor using the custom ...
Figure 5.25: The Test tab of the Client extractor
Figure 5.26: The Test tab of the Client extractor with all documents analyzed...
Figure 5.27: The Label tab of the Client Address extractor
Figure 5.28: The Train tab of the client address
Figure 5.29: The before-label rule of the Client Address extractor
Figure 5.30: The after-label explanation of the Client Address extractor
Figure 5.31: The Advanced Settings area of the after-label explanation for th...
Figure 5.32: Phrase list explanation templates
Figure 5.33: The phrase list explanation templates showing the date template...
Figure 5.34: The date of explanation for the Period End extractor
Figure 5.35: The Client extractor with Accessibility Mode turned off
Figure 5.36: The Client extractor with Accessibility Mode turned on
Figure 5.37: A selected word in the Client extractor using accessibility mode...
Figure 5.38: The Models tab of a content center
Figure 5.39: The Models tab of a content center after importing the Benefits ...
Figure 5.40: The model maintenance page for the Benefits Change Notice sample...
Figure 5.41: The model maintenance page for the Benefits Change Notice sample...
Figure 5.42: The Options For Model Creation modal pop-up screen
Figure 5.43: The options for the teaching method model creation modal pop-up ...
Figure 5.44: The Create A Model With The Teaching Method screen
Figure 5.45: The Create A Model With The Teaching Method screen with the adva...
Figure 5.46: The Create A Model With The Teaching Method screen with the adva...
Figure 5.47: The Create A Model With The Teaching Method screen with the adva...
Figure 5.48: The Create A Model With The Teaching Method screen with the adva...
Figure 5.49: The Create A Model With The Teaching Method screen with the adva...
Figure 5.50: The new Trade Confirmation model, top of page
Figure 5.51: The new Trade Confirmation model, bottom of page
Figure 5.52: The sensitivity label settings in the new Trade Confirmation mod...
Figure 5.53: The retention label settings in the new Trade Confirmation model...
Figure 5.54: Adding example files to the new Trade Confirmation model
Figure 5.55: Adding example files to the new Trade Confirmation model
Figure 5.56: The new Trade Confirmation model after adding example files
Figure 5.57: The new Trade Confirmation model after adding example files
Figure 5.58: The new Trade Confirmation model after adding example files
Figure 5.59: Example of an actual file before being viewed in the classifier....
Figure 5.60: The label phase of classifying files in the Trade Confirmation m...
Figure 5.61: The Train phase of classifying files in the Trade Confirmation m...
Figure 5.62: The Train phase of classifying files in the Trade Confirmation m...
Figure 5.63: The top of the heading explanation for the classifier in the Tra...
Figure 5.64: The advanced settings of the heading explanation for the classif...
Figure 5.65: The Train phase of classifying files in the Trade Confirmation m...
Figure 5.66: The Test phase results of classifying files in the trade confirm...
Figure 5.67: The trade confirmation model details page after classification c...
Figure 5.68: The new entity extractor side panel in the trade confirmation mo...
Figure 5.69: The new entity extractor side panel in the trade confirmation mo...
Figure 5.70: The Label tab of the Trade Number extractor
Figure 5.71: The Label tab of the trade number extractor with labeling comple...
Figure 5.72: The Label tab of the trade number extractor with a mismatched da...
Figure 5.73: The Train tab of the trade number extractor
Figure 5.74: The Explanation templates modal popup for the trade number extra...
Figure 5.75: The Before Label explanation for the trade number extractor
Figure 5.76: The Advanced settings of the Before Label explanation
Figure 5.77: The Before Label explanation after training the first time
Figure 5.78: The Test tab of the Before Label explanation
Figure 5.79: The Entity Extractors panel of the Trade Confirmation model
Figure 5.80: The Label tab of the Reference Number extractor
Figure 5.81: The settings for the proximity explanation
Figure 5.82: The Train tab of the Reference Number extractor after training
Figure 5.83: The Trade Confirmations document library before model deployment...
Figure 5.84: The select SharePoint site screen of the apply model side panel...
Figure 5.85: The select SharePoint document library screen of the apply model...
Figure 5.86: The confirmation screen of the apply model side panel
Figure 5.87: The final confirmation message of the apply model side panel
Figure 5.88: The new Trade Confirmation view of the Trade Confirmations libra...
Figure 5.89: The new Trade Confirmations document library with uploaded docum...
Chapter 6
Figure 6.1: ASCII image rendering
Figure 6.2: Getting from conception to publication
Figure 6.3: Tagged images in a SharePoint library
Figure 6.4: Searching for tagged images
Figure 6.5: Processed image properties
Figure 6.6: Initial OCR setup screen
Figure 6.7: All or nothing, plus exceptions
Figure 6.8: Selecting a repository
Figure 6.9: Manually selecting a SharePoint site type repository
Figure 6.10: Microsoft Syntex Configuration Options
Figure 6.11: SharePoint site selector
Chapter 9
Figure 9.1: Purchasing SharePoint advanced management
Figure 9.2: SharePoint advanced management reports
Figure 9.3: SAM consolidated features blade
Figure 9.4: Details about Site lifecycle management
Figure 9.5: Disabling anonymous reports
Figure 9.6: Sharing links reports
Figure 9.7: Label report list
Figure 9.8: Selecting your label reports
Figure 9.9: Selecting a default label in a library
Figure 9.10: Site owner notice of inactivity
Figure 9.11: Policy execution summary
Figure 9.12: Policy scope settings page
Figure 9.13: Chargeability of Archive storage
Figure 9.14: The Archived Sites blade
Figure 9.15: The Archive site option
Figure 9.16: Group site archive confirmation
Figure 9.17: Reactivation confirmation
Jacob Sanford, Woodrow Windischman, Dustin Willard (Microsoft MVP), and Ryan Dennis
Chris McNulty
I'm honored to be asked to contribute the foreword for this book, Microsoft SharePoint Premium in the Real World. I've known Jacob, Woody, Dustin, and Ryan for years and think you'll find this book to be a worthy addition to your knowledge about maximizing the value of your Microsoft 365 content. They've been actively engaged with Microsoft as we've developed new solutions, and they bring experienced, practical advice about making this vision real for you. Regardless of your role with content, this book is for you.
I joined Microsoft in 2015 to help drive our solutions for content management. For more than two decades, SharePoint has been the engine powering collaboration and content management for our customers. From its early days as a workgroup solution, it has skyrocketed in the cloud to become the world's leading content platform, highly flexible in driving core collaboration across applications like Office and Teams as well as custom applications. Every workday, our customers add over two billion documents to Microsoft 365 and to SharePoint. Understanding how best to shepherd and channel this intensity is more critical than ever.
Over the past 18 months, the digital world has seen the rise of generative AI, led by pioneering solutions like ChatGPT and Copilot, both powered by Microsoft. This is a massive opportunity to transform so much of our digital realm—across employee experience, collaboration, and business process. Generative AI like Copilot works best when it's grounded in content, leveraging the unique intellectual property, creativity, and competitive advantage stored in your files. AI loves content, and there's a huge opportunity since SharePoint is the world's home for content.
The range of digital transformation that we are witnessing is unprecedented. Full disclosure, I'm anchored in Microsoft-connected events—but I think these trends go beyond our own solutions to embrace the digital world.
In the first wave of public cloud, our customers began to shift on-premises, server-based workloads like Exchange and SharePoint to the cloud. From 2012 through 2017, we focused on re-platforming our digital capabilities as a basis for further cloud innovations.
Starting around 2017, Microsoft was again at the forefront of a second wave of digital transformation, bringing forth new, cloud-born applications and services that had never been possible in on-premises data centers. Fueled by nearly limitless capacity and power, and building on established patterns, Microsoft created new capabilities, such as Microsoft Teams, to transform meetings, communication, collaboration, and application delivery.
In 2020, driven by the COVID-19 pandemic, the world leveraged these first two waves to empower remote work and productivity and to remove geography as a requirement for live meetings and communication.
The third wave of digital transformation began in 2022, as classic AI patterns have been extended to everyday information workers. This third wave of digital transformation sits above existing cloud foundations and cloud innovations and defines new patterns of interacting with each other, and with our ideas and activities expressed as content in the Microsoft Cloud.
Going back to the foundational days of Project Cortex in 2019, Microsoft has been executing a vision that we can distribute AI and automation to everyone to ensure that we can build, interact with, and manage our most critical information throughout its full life cycle. The Microsoft Graph provides critical signals to tailor and personalize your content experiences. In this wave, Microsoft has introduced Copilot, built in partnership with OpenAI, as well as extended AI-enhanced solutions like Teams Premium, Syntex, and SharePoint Premium.
We have only just begun to imagine how this can work. Think about the simple out-of-office message. While you're away, it does one very simple thing on your behalf: sending a custom message. Imagine a world where AI can dynamically respond to new messages, automatically grounded in your communications and your content. In this world, you might leave your out-of-office attendant on all the time.
We are well beyond the early days of treating the cloud as a simple place for small teams to share files. "Content is king" has been a mantra since the early days of enterprise content management in the 1990s, and it is truer now than ever before.
Last year, we announced SharePoint Premium, our new wave of integrated apps and services to drive content processing, business processes, and content governance for all content and all files, across every device, and for every user. Building on the patterns of Project Cortex and Microsoft Syntex, SharePoint Premium can automatically read, tag, classify, and process your content; drive workflows for approval and digital signatures; and build new content based on your templates and your data. And all of this is done without custom code, delivered through the apps you use every day: in Office, in Teams, in Outlook, and more.
SharePoint Premium and SharePoint Embedded offer a range of benefits for developers. With SharePoint Premium, developers can leverage the platform's ability to automatically process content.
SharePoint Embedded (SPE), on the other hand, provides developers with the ability to manage application files and documents within their customers' individual Microsoft 365 tenants rather than setting up a separate repository outside tenant boundaries. It creates a new scalable pattern to deliver file and document management capabilities in custom applications.
SPE allows developers to use backend capabilities such as versioning, sharing, search, coauthoring, retention, and sensitivity labels with content in custom apps. This can help customers scale and manage content, while connecting to their existing workflows and providing flexibility when they need it.
One of the key challenges for IT pros is ensuring the protection and retention of their organization's data in Microsoft 365. Data loss can occur due to accidental deletion, malicious attacks, ransomware, or compliance violations. To help IT pros address these risks, we introduced two innovative solutions that are related to SharePoint Premium: Microsoft 365 Backup and Microsoft 365 Archive.
Microsoft 365 Backup provides a comprehensive backup solution for your data in SharePoint, OneDrive, and Exchange Online. It allows you to back up and restore data in place at unprecedented speed and scale, within the same Microsoft trust boundary and with the same security benefits as the rest of Microsoft 365.
Microsoft 365 Archive is a cost-effective storage solution for your inactive or aging content. It allows you to archive data in place using tiered storage, retaining Microsoft 365's security, compliance, search, and rich metadata capabilities. You can define policies and rules to automatically move data to the archive tier based on criteria such as age, activity, or sensitivity. You can also access and manage your archived data seamlessly through the Microsoft 365 user interface.
I am always surprised and delighted to discover the myriad ways our customers leverage Microsoft 365 and SharePoint to create new solutions to business challenges that were unknown even a few years ago. I'm glad to help contribute, a bit, to bringing this book forward into the world to support your innovations in the new wave of digital cloud content. Thank you.
Chris McNulty is Director of Product Marketing for Microsoft 365, SharePoint Premium, OneDrive, SharePoint, and Stream. A co-creator of Microsoft Viva and Syntex, Chris has served as CTO at companies including Dell and Quest Software. He was first recognized as a SharePoint MVP in 2013. A frequent speaker at events around the globe, Chris is the author of the SharePoint 2013 Consultant's Handbook, among other works. Chris holds an MBA in Investment Management from Boston College and has over 20 years' experience with John Hancock, State Street, GMO, and Santander. He blogs at https://techcommunity.microsoft.com and cohosts the Intrazone podcast at https://aka.ms/TheIntrazone.
Greetings, Professor Falken. Would you like to play a game?
Joshua, WarGames
We are all, by any practical definition of the words, foolproof and incapable of error.
I'm sorry, Dave. I'm afraid I can't do that.
HAL 9000, 2001: A Space Odyssey
AI. Artificial Intelligence. For as long as there have been computers, popular culture has ascribed human—even superhuman—thoughts to these “electronic brains.” We've given them names, relied on them for companionship and assistance, and feared for our lives when they have broken free of our control. At least in fiction.
Yet the reality has been far different. Data scientists have worked for decades to get computers to understand and retrieve information in the same way we humans do. Much progress has been made in specific areas, but the creation of an entity that truly mimics the human mind in all of its nuance remains elusive.
Even without achieving that holy (or unholy) grail, the fruits of AI research are all around us. Biometric access to computers and smartphones, automatic language translation, and digital assistants like Alexa and Siri are just a few of the applications taken for granted today, and more examples are on the horizon.
While “conversational” or “generative” AI—such as ChatGPT, Google Bard, and others—is stealing headlines, Microsoft SharePoint Premium is ensuring that the power of AI is also hard at work behind the scenes, analyzing documents, automating data collection, even managing information life cycles.
By picking up this book, you've taken the first step toward bringing this new set of AI technologies to bear on your business. While we're going to focus on Microsoft SharePoint Premium itself, you'll also learn how it integrates with a range of other tools and services. Working together, they will help your users find the needles in the ever-growing haystack of information being added to your systems on a daily basis.
This book is a one-stop guide for anyone who wants to get up to speed on Microsoft SharePoint Premium and its related services. It is designed to help you discover how you can leverage this powerful set of tools to make information more readily available to your users.
If you're an experienced information manager, you will find chapters to help you understand the practical capabilities of the tools.
If you're a technologist, you'll get the ins and outs of configuring the system.
We'll also work to establish a common baseline for everyone from beginners to experts to understand what information management is and how the whole technology stack maps into these needs.
First and foremost, this book covers Microsoft SharePoint Premium. This is not just a single product but a family of tools that work together. The SharePoint Premium family uses content AI and machine learning to aid in the processing of documents throughout their entire life cycle.
This book also covers some of the features and functions of Microsoft 365—and particularly SharePoint Online—that are the beneficiaries of SharePoint Premium processing. In particular, metadata, content management, and the information life cycle are deeply intertwined.
We will talk about other pieces of the Microsoft Cognitive Services landscape, how they are leveraged by SharePoint Premium, and how they can be used to extend its capabilities.
We will also touch on other Microsoft 365 services that leverage these services, such as Copilot, Topics, and the Microsoft Purview compliance center.
The 11 chapters of this book cover three broadly related subjects. There are also two appendices.
First, we provide a baseline of concepts and terminology relating to artificial intelligence, information management, and the Azure cognitive services underpinning many SharePoint Premium features.
Next, we discuss the primary features of SharePoint Premium in relation to the business needs they address. Each chapter includes examples and exercises to help you see how these can apply in your organization.
Finally, we'll go beyond document processing and into detail on how SharePoint Premium integrates with other tools and business processes. We'll also show you how to extend SharePoint Premium by leveraging tools available to both professional and “citizen” developers.
Appendix A helps you set up a sandbox environment, and Appendix B will cover late-breaking updates to SharePoint Premium that we couldn't incorporate into the main chapters.
This book is structured so that you can get a high-level understanding of its concepts just by perusing its chapters. However, to get the most out of it, you will need access to a Microsoft 365 tenant that has the appropriate licensing to access Microsoft SharePoint Premium. There are several ways to accomplish this.
Of course, the easiest way is to use your own enterprise environment. However, it is likely that many information managers will not have the level of access or licensing they need. Most enterprises have strict policies regarding access to advanced features, especially if they could be interpreted as being “administrative” in nature.
This book will help you gain the understanding necessary to use these features effectively and successfully provide business justification for their enablement. However, until you complete the book and have that understanding, it may be hard to provide justification for your own access. Therefore, you will probably have to make use of a temporary environment. This is sometimes called a sandbox, which is basically someplace you can “play” safely. Appendix A, “Preparing to Learn Microsoft SharePoint Premium,” will walk you through the steps for creating a sandbox environment.
Artificial intelligence (AI) is changing the world as we speak and will continue to do so over the next 10 years and into the future. It will affect our kids and their families, and it will certainly affect us.
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”
—Larry Page
“The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast; it is growing at a pace close to exponential.”
—Elon Musk
Artificial intelligence will assist workers rather than replace them altogether. For years we have heard that humans will one day be replaced by computers; while that is true for some tasks, artificial intelligence will aid many of us with our mundane day-to-day tasks and allow us to focus on higher-value activities.
Tools like OpenAI's ChatGPT will help make access to information quicker and easier by allowing us to ask it to explain things in a way that we can understand without necessarily having to do a tremendous amount of research. For example, a programmer trying to write a line of code could ask ChatGPT to write it by describing what they are trying to accomplish. Still, the programmer will need to validate the output, optimize it for their exact situation, and tune its performance before relying on it.
Another good example is the use of robot vacuums; many of us have embraced them and use them daily. While these vacuums do a great job in general, there are hard-to-reach places in our homes that they cannot cover without a human intervening to either move the obstruction or vacuum the spot themselves.
Artificial intelligence takes many forms in the world. Each form has its place, and that's what makes artificial intelligence truly intelligent. In the upcoming sections, the following types of AI will be reviewed: machine learning, machine teaching, reinforcement learning, computer vision, natural language processing, deep learning, and robotics.
Machine learning is a type of artificial intelligence that attempts to develop statistical models and algorithms that help computers make decisions or predictions by learning from data to perform a task. The following image is an example of a machine learning model leveraging the Microsoft Azure platform.
The two main types of machine learning algorithms are supervised learning algorithms and unsupervised learning algorithms:
Supervised Learning
: A type of machine learning in which the algorithm is trained on a labeled dataset, where the correct output is provided for each input. Supervised learning attempts to train a model that can make accurate predictions on new, unseen data. Here are some examples of supervised learning algorithms:
Linear Regression
: Used for regression problems that aim to predict a continuous value. The algorithm finds the line of best fit that minimizes the difference between the predicted and actual values.
Linear regression is a way of finding a relationship between two things, like how the amount of rain affects the amount of water in a bucket. We can use this relationship to make predictions, like how much water will be in the bucket if it rains a certain amount.
Imagine we have a ball that can go faster or slower depending on how much we push it. We can measure the pushing force with a scale and the ball's speed with a stopwatch.
The harder we push the ball, the faster it goes. We can use linear regression to create a simple equation that helps us predict how fast the ball will go based on how hard we push it. The equation might look something like this:
speed = 2 × force + 3

Source: Microsoft / https://learn.microsoft.com/en-us/azure/architecture/example-scenario/ai/many-models-machine-learning-azure-machine-learning/ last accessed March 28, 2023
This means that if we push the ball with a force of 8, we can predict that the ball will go at a speed of 19 (2 × 8 + 3). If we push the ball with a force of 12, we can predict that the ball will go at a speed of 27 (2 × 12 + 3).
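The ball example can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (the function name `fit_line` and the sample measurements are invented); fitting a least-squares line to measurements that follow speed = 2 × force + 3 recovers the slope and intercept from the text:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Push-force and ball-speed measurements that follow speed = 2 * force + 3.
forces = [1, 2, 4, 6, 10]
speeds = [2 * f + 3 for f in forces]

slope, intercept = fit_line(forces, speeds)
predicted = slope * 8 + intercept   # predicted speed for a push of force 8
print(round(slope, 6), round(intercept, 6), round(predicted, 6))   # 2.0 3.0 19.0
```

With perfectly linear data the fit is exact; with noisy real-world measurements the same formula finds the closest line it can.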
Logistic Regression
: Used for binary classification problems. The goal is to predict one of two possible outcomes. The algorithm finds the best boundary that divides the data into separate classes.
Logistic regression is a way of predicting whether something will happen. It is like guessing if it is going to rain tomorrow. We can use logistic regression to make predictions based on patterns in the data we collect.
For example, say we have a bag of candy; some pieces are blue, and some are red. Can we predict whether we will pick a blue or a red candy from the bag?
We can use logistic regression to create a simple equation that helps us make this prediction based on the features of the candy, such as its size or weight.
Using logistic regression, we can predict whether something will happen based on the patterns we observe in the data. This is a valuable tool for understanding the world and making informed decisions.
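The candy example can be sketched with the logistic (sigmoid) function that gives the technique its name. The weight coefficient and bias below are made-up stand-ins for values that would normally be learned from labeled candies:

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

def predict_blue(weight_grams, w=1.5, b=-6.0):
    """Probability that a candy is blue, from its weight.
    w and b would normally be learned from labeled examples;
    these values are invented for illustration."""
    return sigmoid(w * weight_grams + b)

p_light = predict_blue(2.0)   # a light candy
p_heavy = predict_blue(6.0)   # a heavy candy
print(round(p_light, 3), round(p_heavy, 3))   # about 0.047 and 0.953
```

Anything above 0.5 we would call "blue," anything below "red"; that cutoff is the decision boundary the algorithm learns.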
Decision Trees
: Used for classification and regression problems. The algorithm builds a treelike structure by dividing the data into smaller and smaller groups based on the values of the features.
Decision trees are a way of making decisions by following a series of steps, like a flowchart. Based on the information we have, a decision tree can make the choice for us.
Imagine we want to decide what to do based on the weather outside. We can create a decision tree with two branches, one for nice days and one for rainy days. Here is an example of a decision tree.
This decision tree tells us that if it is nice outside, we should play tennis, but if it is raining, we should play video games instead.
We can make the decision tree more complex by adding more branches and choices, such as a rule that it is too cold to be outside if the temperature is below 45 degrees. This helps us make better decisions by considering all the factors that might be important.
Using decision trees, we can make choices based on our information and follow a logical process to make the best decision. This is a valuable tool for making decisions in everyday life, like what to eat for breakfast or what game to play with friends.
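The weather tree above can be written directly as nested if statements. The activities and the 45-degree rule come from the text; the function name and the "stay inside" fallback are our own:

```python
def choose_activity(weather, temperature_f):
    """A hand-written decision tree: each branch asks a question about the data."""
    if weather == "nice":
        if temperature_f < 45:
            return "stay inside"      # too cold to be outside
        return "play tennis"
    return "play video games"         # raining

print(choose_activity("nice", 70))    # play tennis
print(choose_activity("nice", 30))    # stay inside
print(choose_activity("rainy", 60))   # play video games
```

A learned decision tree works the same way; the difference is that the algorithm chooses the questions and thresholds from data instead of a person writing them by hand.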
Neural Networks
: Used for many problems, such as natural language processing, image classification, and speech recognition. Neural networks consist of layers of interconnected neurons that process the input data and generate the output.
Neural networks are like magic boxes that can learn to do things on their own. They comprise many small parts that work together to solve problems, similar to how our brain works.

Suppose we wanted to teach a computer to recognize different birds. We can use a neural network to do this. We can show a neural network pictures of birds like eagles, hawks, and sparrows. The neural network will examine the images and determine what makes each bird different.

As the neural network learns, it will start recognizing patterns in the pictures. For example, it might learn that eagles have large wingspans and large talons. It will use these patterns to make predictions about new pictures it sees.
Once the neural network has learned enough, we can give it a new picture and ask it to tell us what bird is in the image. It will use what it learned to predict the bird in the picture, whether it's an eagle or a hawk.
We can use neural networks to teach computers to learn and make decisions independently. This is a powerful tool that can be used to solve many problems, like recognizing animals, predicting the weather, or playing games.
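The smallest possible "network" is a single neuron (a perceptron), and it is enough to show the learning loop described above. Everything here is invented for illustration: the feature values (wingspan in meters, talon length in centimeters) are rough stand-ins, not real measurements, and a real image network would learn from pixels through many layers:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train one neuron: samples are ((feature1, feature2), label) with label 0 or 1."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred              # 0 when right; nudge weights when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# (wingspan m, talon cm) -> 1 for eagle, 0 for sparrow (illustrative numbers).
birds = [((2.0, 6.0), 1), ((2.2, 7.0), 1), ((0.2, 0.5), 0), ((0.25, 0.6), 0)]
w, b = train_perceptron(birds)

def is_eagle(x1, x2):
    return w[0] * x1 + w[1] * x2 + b > 0

print(is_eagle(2.1, 6.5))    # True
print(is_eagle(0.22, 0.5))   # False
```

Deep networks stack many such neurons in layers, but the idea is the same: make a prediction, compare it to the label, and adjust the weights.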
Unsupervised Learning
: A machine learning algorithm trained on an unlabeled dataset, without the correct output provided. Unsupervised learning attempts to locate patterns and structures in the data on its own. Here are some examples of unsupervised learning algorithms:
K-Means Clustering
: Used for clustering problems to separate the data into a designated number of groups called clusters. The algorithm finds the center of each cluster and assigns each data point to the closest center.
K-means clustering is a way of organizing things into groups based on their similarities. We can use K-means clustering to group together things that are similar.
For example, say we have a bunch of nails, some long and some short. We can use K-means clustering to group the nails based on their appearance.
Let's start by grouping the nails based on their color. We could put all the black nails in one group, all the white nails in another, and so on. Then, we could group the nails within each color group based on their shape, like all the nails that are long in one group, all the nails that are short in another group, and so on.
After we have sorted the nails into groups, we can see that all the long black nails are together in one group and all the long white nails are together in another group.
Using K-means clustering, we can group things based on their similarities. This is a valuable tool for organizing things in a way that makes sense, like grouping toys or sorting items by color and shape.
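The nail-sorting idea can be shown with a tiny one-dimensional k-means over nail lengths. The lengths and the naive "first k values" initialization are our own simplifications (real implementations pick starting centers more carefully):

```python
def k_means_1d(values, k=2, iterations=10):
    """A tiny 1-D k-means: returns the final cluster centers, sorted."""
    centers = values[:k]   # naive initialization: the first k values
    for _ in range(iterations):
        # Assign every value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Nail lengths in millimeters: a short group and a long group.
lengths = [20, 22, 21, 75, 80, 78]
print(k_means_1d(lengths))   # two centers: one near 21, one near 78
```

Assign, then re-center, then repeat: that two-step loop is all k-means is, whether the data is nail lengths or document features.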
Autoencoders
: Used for dimensionality reduction, anomaly detection, and generative modeling. The goal is to learn a compressed representation of the data and then reconstruct the original data from that compressed representation.
Autoencoders are a type of machine learning model that can learn to create things like what they have seen before. They are a form of unsupervised learning, meaning they can learn without someone telling them the correct answer.

Imagine we want to draw a picture of an eagle but we aren't very good at drawing. We can use an autoencoder to help us create an eagle that looks like a real eagle.
The autoencoder will look at many pictures of eagles and figure out what makes an eagle look like an eagle. It might learn that eagles have long wingspans and large talons.
Then, when we give the autoencoder a blank piece of paper, it will use what it learned to create a picture of an eagle. It might draw a bird with a long wingspan and large talons that looks like an eagle.
Using autoencoders, we can create things similar to what we have seen before, even if we aren't very good at creating them ourselves. This is a valuable tool for making pictures or music or solving problems like recognizing objects in a picture or predicting the weather.
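A genuinely minimal autoencoder can be built from two single-number weights: one to "compress" the input and one to reconstruct it. This toy is entirely our own invention for illustration; it learns, via gradient descent, weights whose product is close to 1 so that inputs survive the round trip, which is the compress-then-reconstruct idea in miniature:

```python
def train_autoencoder(data, lr=0.01, epochs=200):
    """A one-number autoencoder: encode x -> w_enc * x, decode z -> w_dec * z.
    Gradient descent nudges the weights so the reconstruction matches the
    input (which happens when w_enc * w_dec is close to 1)."""
    w_enc, w_dec = 0.5, 0.5
    for _ in range(epochs):
        for x in data:
            z = w_enc * x        # the compressed representation
            x_hat = w_dec * z    # the reconstruction
            err = x_hat - x
            # approximate gradients of the squared reconstruction error
            w_dec -= lr * 2 * err * z
            w_enc -= lr * 2 * err * w_dec * x
    return w_enc, w_dec

w_enc, w_dec = train_autoencoder([0.5, 1.0, 1.5, 2.0])
print(round(w_enc * w_dec, 2))   # close to 1.0
```

A real autoencoder does the same thing with thousands of weights and a latent space much smaller than the input, which forces it to keep only the essential features, such as "eagles have long wingspans."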
Supervised learning is used when the goal is to predict output for new data, and unsupervised learning is used when the goal is to find patterns and structures in the data. The choice of an algorithm should depend on the problem type and the amount of data available. It is essential to carefully evaluate the performance of different algorithms and choose the best fit for the situation.
Machine teaching is one of the forms of AI that Microsoft SharePoint Premium (formerly Microsoft Syntex) uses, and it focuses on training machine learning models by providing high-quality, representative examples of inputs and desired outputs. Machine teaching attempts to help machines learn to recognize patterns or perform particular tasks by providing guidance and feedback through carefully selected training data. Unlike traditional machine learning, where models are trained using large amounts of data and a single objective function, machine teaching focuses on a specific desired outcome and provides tailored training data to achieve that outcome. This approach can improve the performance and robustness of machine learning models and create specialized models for specific use cases. There are two primary forms of machine teaching:
Supervised Machine Teaching
: This approach provides labeled examples to a machine learning system, allowing it to learn from those examples and make predictions on new data. The human expert is responsible for labeling the training data and guiding the machine learning system as it makes predictions.
Imagine that you want to teach a computer to recognize different foods, like meat, vegetables, and fruit. First you would need to show the computer pictures of different types of food and label them as meat, vegetable, or fruit. For example, you show the computer a picture of a T-bone steak and mark it as meat.
Then you show the computer pictures of other types of food and tell it what they are called. The computer learns to recognize the patterns and features of each food and associates them with its name.
Once the computer has learned from many examples, you can provide a new picture of a food item and ask it to tell you what it is. The computer will use what it has learned to make an informed guess.
In supervised machine teaching, the goal is to teach the computer to recognize the patterns and features of different types of food to identify them correctly in new pictures. It's like you are teaching the computer to identify foods just as you can.
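The label-then-recognize process can be sketched with a one-nearest-neighbor rule: new items take the label of the most similar labeled example. The features (sweetness and protein scores from 0 to 10) and all the numbers are invented for illustration; a real system would extract features from the pictures themselves:

```python
def nearest_label(example, labeled_examples):
    """1-nearest-neighbor: return the label of the closest labeled example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labeled_examples, key=lambda item: dist(example, item[0]))
    return closest[1]

# Hand-labeled examples: (sweetness 0-10, protein 0-10) -> label.
foods = [
    ((1, 9), "meat"),        # T-bone steak
    ((2, 8), "meat"),        # chicken breast
    ((3, 2), "vegetable"),   # carrot
    ((2, 3), "vegetable"),   # broccoli
    ((8, 1), "fruit"),       # apple
    ((9, 1), "fruit"),       # mango
]

print(nearest_label((8.5, 0.5), foods))   # fruit
print(nearest_label((1.5, 8.5), foods))   # meat
```

The quality of the labels the human expert provides determines the quality of every prediction, which is exactly the point of machine teaching.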
Interactive Machine Teaching
: This approach involves an iterative process in which the human expert provides feedback to the machine learning system, allowing it to improve its predictions gradually. The machine learning system may ask questions or make predictions, and the human expert provides feedback on the accuracy of the predictions. This approach allows the human expert to refine the machine learning model over time and make it more accurate.
For example, say you want to teach a computer to play a game with you, like a fill-in-the-blank game.
Interactive machine teaching is like playing a game with the computer and giving it feedback on its suggestions. For example, suppose the computer picks the correct letter. In that case, you could say, “Nice work!” If it chooses the wrong letter, you might say, “Not so good.” As you play the game with the computer and give it feedback, the computer learns to make better moves and improve its ability to pick the correct letters.
Interactive machine teaching attempts to teach the computer to play the game well by giving feedback on its moves. It is like teaching the computer to become a better gamer, just as a doctor helps you become healthier.
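The feedback loop can be sketched as a guesser that raises or lowers a score for each letter based on our "Nice work!"/"Not so good" responses. The class and its scoring scheme are our own invention, not a real teaching framework:

```python
class LetterGuesser:
    """Learns which letter to pick from simple right/wrong feedback."""
    def __init__(self, letters):
        self.scores = {ch: 0.0 for ch in letters}

    def guess(self):
        # Pick the letter with the best score so far (ties go to the first letter).
        return max(self.scores, key=self.scores.get)

    def feedback(self, letter, correct):
        # "Nice work!" raises the score; "Not so good" lowers it.
        self.scores[letter] += 1.0 if correct else -1.0

guesser = LetterGuesser("abcde")
secret = "c"
for _ in range(10):
    letter = guesser.guess()
    guesser.feedback(letter, letter == secret)

print(guesser.guess())   # c
```

After a few rounds of feedback, the wrong letters have been pushed down and the right one pushed up, so the guesser's moves improve, just as the text describes.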
In both forms of machine teaching, the human expert plays a critical role in guiding the machine learning process. By working with the machine learning system, the expert can impart their expertise and help the system make more accurate predictions. Additionally, by leveraging the efficiency and scalability of machine learning algorithms, machine teaching can help automate many tasks previously performed by people, freeing up time for more creative and strategic work.
Now that you have a basic understanding of interactive machine teaching, it probably isn't hard to imagine a real-world scenario that could benefit your professional life. To illustrate, one of the authors engaged in a fascinating project using this approach. A large international retail client wanted to have its users post questions to Human Resources in a specific community inside Microsoft Yammer. Without going too far into the overall architecture, the solution used Power Automate to intercept the email from the Yammer community when a question was asked and then sent the question to Azure Cognitive Services. Cognitive Services then sent a response back with a confidence score, signifying how confident it was that the service had provided an accurate answer to the question. If the confidence score fell below a threshold set by the team, a message was sent to a subject matter expert (SME) to review the response. If the SME approved the answer, the response got sent back to Yammer as the response to the question. However, if the SME didn't like the response, they would submit a new response to the question. This unique response would get sent back to Yammer as the answer to the question and to Cognitive Services to be saved as the answer so that Cognitive Services could learn and get smarter. While this is a summary of months of work, hopefully it can show how this approach to AI can be used in today's modern world.
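The confidence-threshold routing at the heart of that solution can be sketched as follows. This is not the project's actual code; the threshold value, function names, and messages are illustrative stand-ins for the Power Automate flow and the Cognitive Services calls:

```python
CONFIDENCE_THRESHOLD = 0.8  # the team set a threshold; 0.8 is illustrative

def route_answer(ai_answer, confidence, sme_review):
    """Post a confident answer directly; otherwise send it to a subject
    matter expert (sme_review is a callable standing in for the human)."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ai_answer, "posted automatically"
    approved, replacement = sme_review(ai_answer)
    if approved:
        return ai_answer, "posted after SME approval"
    # In the real solution, the replacement was also sent back to
    # Cognitive Services so the service could learn from it.
    return replacement, "posted SME's new answer"

def demo_sme(answer):
    # A pretend SME who rejects the draft and supplies a better answer.
    return False, "Contact HR at extension 1234 to request leave."

answer, status = route_answer("Fill in form X.", 0.4, demo_sme)
print(status)   # posted SME's new answer
```

The human stays in the loop only for the uncertain cases, which is what makes the approach scale: the better the service gets, the less often the SME is interrupted.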
Reinforcement learning (RL) is a field of machine learning concerned with developing algorithms that allow agents to learn how to make optimal decisions based on feedback from their environment. In RL, an agent interacts with an environment, taking actions and receiving feedback in the form of rewards or penalties. The agent attempts to learn a policy that maximizes its cumulative reward over time.

In RL, the agent learns by trial and error, gradually improving its policy through experience. This contrasts with other types of machine learning, such as supervised learning, where the training data is labeled and the algorithm learns to map inputs to outputs. The essential components of a reinforcement learning problem include an agent (the learner or decision maker that interacts with the environment), environment (the external system that the agent interacts with), state (the current situation of the environment at a particular time), action (the decision or choice that the agent makes based on the current state), reward (the feedback that the agent receives from the environment for its action), and policy (the mapping between states and actions that the agent uses to make decisions).

The main objective of RL is to learn an optimal policy that maximizes the expected cumulative reward over time. This can be done using different algorithms, such as Q-learning and State–action–reward–state–action (SARSA), which use various techniques to learn the optimal policy.
RL has been successfully applied to various applications, including game playing, robotics, and autonomous driving. It is a powerful technique for developing intelligent systems that can learn from their environment and adapt to changing conditions.
Imagine you have a drone you can control with a remote control device. You want the drone to reach a specific target on the other side of the room, but there are obstacles.
Reinforcement learning is similar to you trying different ways to fly the drone toward the target, and each time you get closer, you get a point. If you hit an obstacle, you lose a point.
If you keep trying different ways to fly the drone, you learn which actions get you closer to the target. This is the drone learning from its experiences, just as you are learning from yours. Eventually, you find the best way to get the drone to the target and score points.
Reinforcement learning attempts to find the best way to reach the target and get as many points as possible. The drone is like the agent, and the remote control device is like the policy, telling the drone what to do. The obstacles and targets are like the environment, and the points are like the rewards.
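A toy version of the drone's trial-and-error loop can be written as tabular Q-learning on a one-dimensional "room," where the drone starts in cell 0 and the target sits at the far end. Everything here (grid size, rewards, learning parameters) is invented for illustration:

```python
import random

def train_q_learning(n_cells=6, target=5, episodes=300,
                     alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning on a 1-D room: action 0 moves left, 1 moves right.
    Reaching the target cell earns +1; every other step costs 0.01."""
    random.seed(0)  # deterministic demo
    q = {(s, a): 0.0 for s in range(n_cells) for a in (0, 1)}
    for _ in range(episodes):
        state = 0
        while state != target:
            # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice((0, 1))
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            nxt = max(0, state - 1) if action == 0 else min(n_cells - 1, state + 1)
            reward = 1.0 if nxt == target else -0.01
            best_next = max(q[(nxt, 0)], q[(nxt, 1)])
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train_q_learning()
# The learned policy: the best action in every non-target cell.
policy = [max((0, 1), key=lambda a: q[(s, a)]) for s in range(5)]
print(policy)   # all 1s: always move right, toward the target
```

The Q table is the agent's accumulated experience; after enough episodes of bumping around, "move right" outscores "move left" in every cell, which is the learned policy.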
Computer vision is a type of AI that helps computers interpret and understand visual information, such as images and videos. The following image is an example of a computer vision model leveraging the Microsoft Azure platform.
Computer vision has several tasks, including object detection, image classification, and image segmentation.
Object Detection
: Used for identifying objects of interest in an image or video. Object detection models typically use convolutional neural networks (CNNs) to perform feature extraction and region-based algorithms such as Fast R-CNN or YOLO to detect objects in the image. Object detection has numerous real-world applications, such as self-driving cars, security systems, and medical imaging.
Object detection is a technology that helps computers identify and locate objects in pictures or videos. We can use object detection to help us find things we are looking for, like our toys or favorite animals.
Say we have a picture of a car lot with many vehicles. We might use object detection to find all the trucks in the picture.
Source: Microsoft / https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/ai/end-to-end-smart-factory/ last accessed March 28, 2023
The computer might look at each part of the picture and try to find things that look like trucks. When it finds a truck, it can put a circle or a square around it to show us where it is.
Object detection allows us to find things we seek more easily and quickly. This can be useful for many things, such as finding lost items or identifying animals or objects in pictures. It can even be used for safety and security purposes.
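Real object detection relies on CNNs like those mentioned above, but the core idea of locating objects and drawing a box around them can be mimicked on a tiny labeled grid that stands in for an image. The grid, the label characters, and the function name are all our own illustration:

```python
def find_bounding_box(grid, label):
    """Scan a labeled grid (a stand-in for pixels) and return a bounding box
    (top, left, bottom, right) around every cell carrying `label`."""
    cells = [(r, c) for r, row in enumerate(grid)
             for c, value in enumerate(row) if value == label]
    if not cells:
        return None
    rows = [r for r, _ in cells]
    cols = [c for _, c in cells]
    return (min(rows), min(cols), max(rows), max(cols))

# A 4x6 "photo" of a car lot: T = truck pixel, . = background.
lot = [
    "......",
    ".TT...",
    ".TT...",
    "......",
]
print(find_bounding_box(lot, "T"))   # (1, 1, 2, 2)
```

A real detector does the hard part this sketch skips, deciding which pixels belong to a truck in the first place, but the output is the same kind of box.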
Image Classification
: Assigns a predefined label or category to an image based on its content. For example, an image classification model might be trained to classify images such as dogs, cats, or cars. Image classification models typically use CNNs as the basis for their predictions.
Image classification is a way of sorting pictures into categories based on their appearance. We can use image classification to group photos based on their common characteristics.
For example, if we have a bunch of pictures of vehicles, we can use image classification to group the photos based on what kind of vehicle they show.
Let's start by looking at the pictures and identifying the different vehicles. We could put all the images of trucks in one group, all the pictures of sedans in another group, and all the photos of semi-trucks in a third group.
After we have sorted the pictures into groups, each group contains only one kind of vehicle: trucks in one, sedans in another, and semi-trucks in the third.
Using image classification, we can group pictures based on what they show. This is a valuable tool for organizing things in a way that makes sense, like sorting photos or identifying objects in an image.
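A heavily simplified classifier in the same spirit: instead of a CNN looking at pixels, we assign each vehicle to whichever class average it is closest to (a nearest-centroid rule). The single feature (vehicle length) and the class averages are rough, invented values:

```python
def nearest_centroid(length_m, centroids):
    """Classify a vehicle by whichever class's average length is closest."""
    return min(centroids, key=lambda label: abs(length_m - centroids[label]))

# Illustrative average lengths in meters per class, as if computed
# from labeled photos.
centroids = {"sedan": 4.5, "truck": 6.0, "semi-truck": 16.0}

print(nearest_centroid(4.3, centroids))    # sedan
print(nearest_centroid(14.0, centroids))   # semi-truck
```

A CNN classifier does the same "closest category" assignment, only in a feature space it has learned from the images rather than a single hand-picked measurement.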
Image Segmentation
: Divides an image into multiple segments, each corresponding to a different object or part of the image. Image segmentation models use various algorithms, such as region-based or graph-based approaches and semantic segmentation methods, to perform image segmentation. Image segmentation has applications in fields such as medical imaging, where it can be used to segment tumors or other structures, and computer graphics, where it can be used to separate the foreground and background of an image.
Image segmentation is a technology that helps computers identify different parts of an image and separate them into other groups. We can use image segmentation to help us understand pictures better and identify the various objects in them.
Just envision having a picture of a garden with different vegetables and flowers. We might use image segmentation to separate the picture into parts, like the vegetables and the flowers.
The computer might look at each part of the picture and try to group things that look the same together. When it finds a group, it can color it or put a line around it to show us where it is.
We can understand pictures better and identify their different objects by using image segmentation. This can be useful for many things, like finding animals in the woods, researching molecules in a cell, or even designing a room.
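The "group pixels that look the same" idea can be mimicked with a classic flood fill over a tiny labeled grid. Real segmentation models work on raw pixels; this sketch uses pre-labeled cells purely to show the grouping step, and the garden grid is our own invention:

```python
def flood_fill_segments(grid):
    """Group touching cells of the same value into segments (connected
    components), like separating vegetables from flowers in a picture.
    Returns a list of (value, segment_size) pairs."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    segments = []
    for r0 in range(rows):
        for c0 in range(cols):
            if (r0, c0) in seen:
                continue
            value = grid[r0][c0]
            stack, size = [(r0, c0)], 0
            seen.add((r0, c0))
            while stack:
                r, c = stack.pop()
                size += 1
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen and grid[nr][nc] == value):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            segments.append((value, size))
    return segments

# A tiny "garden photo": V = vegetable pixels, F = flower pixels.
garden = [
    "VVF",
    "VFF",
]
print(flood_fill_segments(garden))   # [('V', 3), ('F', 3)]
```

Semantic segmentation models produce exactly this kind of per-region labeling, except they also decide what each pixel is before the regions are grouped.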
Natural language processing (NLP) is a branch of AI that enables machines to understand, interpret, and generate human language. The following image is an example of a natural language processing model leveraging the Microsoft Azure platform.
NLP tasks can be divided into text classification, sentiment analysis, and machine translation:
Text Classification