In this post I will show how to call Azure functions secured with AAD from C#. Let’s assume that you have already created a Function app and secured it with AAD. In this example I will use the Advanced AAD authentication configuration of the Function app, which may be used e.g. when the Function app is created in one tenant but its Azure functions are supposed to be used in another tenant (and thus should be secured against this other tenant):
Briefly, in order to secure an Azure function with AAD we need to register a new app in the directory against which we want to authenticate the users of the Azure function (Azure Active Directory > App registrations > New app registration); in this example let’s call it test-secured-functions for reference. Then, in the authentication settings of the Function app, specify in Client ID the app id of test-secured-functions, in Client Secret the client secret of test-secured-functions (it can be generated from the app’s Settings > Keys) and in Issuer Url a url of the form https://sts.windows.net/{tenantId}, where tenantId is the id of the tenant where test-secured-functions is registered (it can be found in Azure Active Directory > Properties > Directory ID).
After that copy the base url of any Azure function in the Function app (e.g. https://{azurefuncname}.azurewebsites.net – the part which comes before /api/…) and add it to Allowed token audiences. Otherwise you will get HTTP 401 Unauthorized when you try to get an access token for calling the Azure function from C#.
Now the tricky part: go to test-secured-functions > Settings > Properties and change the “Home page URL” and “App ID URI” properties to the base url of the Azure function – the same one which was added to Allowed token audiences in the previous step (https://{azurefuncname}.azurewebsites.net). Setting “Home page URL” is especially important – it will be used as the app id when we get the access token:
Now we may call our AAD secured Azure function from C#:
string aadInstance = "https://login.windows.net/{0}";
string tenant = "{tenant}.onmicrosoft.com";
string serviceResourceId = "https://{azurefuncname}.azurewebsites.net";
string clientId = "{clientId}";
string appKey = "{clientSecret}";
var authContext = new AuthenticationContext(string.Format(CultureInfo.InvariantCulture, aadInstance, tenant));
var clientCredential = new ClientCredential(clientId, appKey);
AuthenticationResult result = authContext.AcquireTokenAsync(serviceResourceId, clientCredential).Result;
Console.WriteLine(result.AccessToken);
Here you need to replace {tenant} with your tenant name, {azurefuncname} with the name of your Function app, and {clientId} and {clientSecret} with the values from the test-secured-functions app – the same ones which were used in Function app > Active Directory Authentication above. Notice that the serviceResourceId variable contains the base url of our Azure functions, i.e. we ask for an access token for scope = url of our Azure function. If we don’t set this property as described above, we will get the following error when we run the code:
AADSTS50001: The application named https://{azurefuncname}.azurewebsites.net was not found in the tenant named {tenant}.onmicrosoft.com. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant
If we decode the token (e.g. on https://jwt.io/) we will see that its aud property is set exactly to the requested serviceResourceId, i.e. the base url of the Azure function. This is why we set this base url in the app’s “Home page URL” above – the API searches for the app by this url during the call to AuthenticationContext.AcquireTokenAsync():
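For reference, the relevant part of the decoded payload should look roughly like this (the values are placeholders, not the real ones from the screenshot):
{
  "aud": "https://{azurefuncname}.azurewebsites.net",
  "iss": "https://sts.windows.net/{tenantId}/",
  "appid": "{clientId}"
}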
Once the token is obtained we may call the Azure function the regular way by providing the Authorization HTTP header with the Bearer token:
var request = WebRequest.Create("https://{azurefuncname}.azurewebsites.net/api/test") as HttpWebRequest;
request.Method = "POST";
request.ContentType = "application/json";
request.Headers.Add("Authorization", "Bearer " + accessToken);
string data = ...;
var byteData = Encoding.UTF8.GetBytes(data);
request.ContentLength = byteData.Length;
var stream = request.GetRequestStream();
stream.Write(byteData, 0, byteData.Length);
using (var response = request.GetResponse() as HttpWebResponse)
{
...
}
Using this method you will be able to call your AAD secured Azure functions from C# or PowerShell. Hope this information helps someone.
Sometimes we need to query list items against fields of the UserMulti type. In order to do that with Camlex we will use the support for LookupMulti field types added a few years ago (see Camlex 4.2 and Camlex.Client 2.2 are released) and the knowledge about the behavior of the Eq operator when it is used with multi lookup fields: Multi lookup fields in CAML queries. Eq vs Contains. Briefly, when the Eq operator is used with LookupMulti field types it has different semantics: instead of matching list items which exactly equal the specified value it matches items which include this value. Consider the following example:
User user = web.EnsureUser(userName);
var query = new CamlQuery();
query.ViewXml = Camlex.Query().Where(x => x["TestField"] == (DataTypes.LookupMultiId)user.Id.ToString()).ToString(true);
var listItems = list.GetItems(query);
ctx.Load(listItems);
ctx.ExecuteQueryRetry();
At first it searches for the user by user name and then uses the user id for creating the query. This query will return all items which include the user with the specified user id (in the example above userId = 4) in TestField which has the UserMulti type. The field may contain only this user or it may contain several users including the one we are searching for – in both cases the list item will be returned. This was one of the reasons why support for the Includes keyword was not added to Camlex – there is no need for it, as the same result may be achieved with the simple Eq operator.
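For reference, the CAML produced by the Camlex expression above should look roughly like this (assuming the resolved user id is 4):
<Where>
  <Eq>
    <FieldRef Name="TestField" LookupId="True" />
    <Value Type="LookupMulti">4</Value>
  </Eq>
</Where>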
Today Camlex 5.0 and Camlex.Client 3.0 were released. In this release the target framework version of both libraries has been changed from .Net 3.5 to .Net 4.5 in order to simplify usage of the libraries in newer Sharepoint versions running on CLR 4.0 (SP2013/SP2016/SP2019) and Sharepoint Online. Also with this change it became possible to use Camlex.Client in Azure functions (V1, which use .Net 4.5) – earlier it was necessary to recompile the source code targeting .Net 4.5, which was not convenient. The source code of the basic Camlex library is available in the Github master branch, and the source code of Camlex.Client is in the client branch.
Also both Nuget packages have been updated: Camlex.NET.dll and Camlex.Client.dll. With this release it will be easier to use Camlex with newer Sharepoint versions.
Update: after performing the release described above I realized that it would be more convenient for Azure functions to reference Microsoft.SharePoint.Client.dll of v16.1 (Sharepoint Online). For console apps it won’t add complexity since it is quite easy to redirect assembly binding there via app.config. So I released the new Camlex.Client 3.1 which targets .Net 4.5 and references Microsoft.SharePoint.Client.dll of v16.1.
Sometimes we need to get a list of all Office 365 groups where the user is an owner. It is relatively easy to get the list of groups where the user is a member using the following endpoints:
(The first 2 endpoints work with app permissions while the last one works with delegated permissions.) Unfortunately the same methods don’t work for owners. If you try to use “ownerOf” in these endpoints the following error will be shown:
Note that we’ve added “?$expand=owners” to the query string. With this additional param each group will be returned together with the list of its owners. After that you may filter the groups and keep only those where the current user is an owner. This is of course not as convenient and fast as the methods above, but better than nothing.
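The workaround query itself is not shown above; it is roughly a request like the following, after which the returned groups can be filtered client-side by checking whether the current user appears among the expanded owners:
GET https://graph.microsoft.com/v1.0/groups?$expand=owners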
Suppose that you have your product’s documentation in Word format and at some point decide to create an online version of it. The built-in conversion to html doesn’t work very well, so what other ways are available? Below you will find several possible ways to convert product documentation from Word to an online version. These methods are based on the Pandoc project which can convert documents between many popular formats.
1. Word to EPUB and then to html
This method is based on the fact that the EPUB format is internally based on html. As a bonus it splits the Word document into separate html documents per chapter. So if you have a single big Word document with many images it will be divided into several html chapters, which is better for an online version than a single big html page.
At first we need to convert Word to EPUB:
pandoc -f docx -t epub -o output.epub input.docx
After that you will have the output.epub ebook. Change the extension from epub to zip and unzip the file to a local folder. In the EPUB subfolder of this folder you will find the following file structure:
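Roughly (the exact names may vary with the Pandoc version) it contains:
content.opf
nav.xhtml
media/
text/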
Here the media folder will contain all images exported from the Word file and nav.xhtml will contain a clickable table of contents. The text subfolder will contain html files for the particular chapters:
2. Word to Markdown and then visualize with MkDocs site generator
With this method we at first convert the Word document to Markdown format using the same Pandoc tool:
pandoc -f docx -t markdown -o output.md input.docx --extract-media media
Here we explicitly specified the folder where Pandoc should extract images from the Word document. When we have the Markdown file we may create a static site for it using the MkDocs tool. With this tool we at first need to create a new project folder and put the markdown file with images there:
python -m mkdocs new test
It will also put the following mkdocs.yml file into the site’s root folder:
site_name: My Docs
and then run:
python -m mkdocs serve
which will launch a local web server hosting your online documentation. It is also possible to choose a different UI theme from the list of themes available on the MkDocs site.
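For example, switching to another built-in theme is only a one-line change in mkdocs.yml (the theme name below is just an illustration):
site_name: My Docs
theme: readthedocs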
Sometimes we need to get the current user’s principal in an Azure function in order to check whether the user has permissions to perform the requested action (of course when the call to the Azure function is done in user context). Recently MS announced a feature called ClaimsPrincipal binding data for Azure Functions. With this feature it should be possible to inject the client principal as a function parameter:
public static IActionResult Run(HttpRequest req, ClaimsPrincipal principal, ILogger log)
{
// ...
return new OkResult();
}
Note that according to the documentation this feature will only be available for Azure functions which use the v2 runtime (which also means that they use .Net Core instead of .Net Framework). I tested it and at least currently this feature is not available on my dev tenant.
Fortunately there is a way to read the current user’s principal which works both for v1 and v2. It is based on the special HTTP header X-MS-CLIENT-PRINCIPAL-NAME which contains the user name (see Access user claims):
So we can read current user’s principal name in Azure function like this:
var headerValues = req.Headers.GetValues("X-MS-CLIENT-PRINCIPAL-NAME");
return headerValues.FirstOrDefault();
and after that perform necessary authorization checks.
Update 2018-12-28: the method above with HTTP headers works, but it is possible to replace the X-MS-CLIENT-PRINCIPAL-NAME header with another user id and perform calls on behalf of that user. Here is how you may get the current user principal using the object model:
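The original listing is not reproduced here; a minimal sketch for a v1 (.Net Framework) function, assuming App Service Authentication is enabled, is roughly the following – read the principal which the platform has already attached to the request instead of trusting raw headers:
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // App Service Authentication populates the claims principal for the current request
    var principal = System.Security.Claims.ClaimsPrincipal.Current;
    var name = principal?.Identity?.Name;
    log.Info($"Current user: {name}");
    return req.CreateResponse(System.Net.HttpStatusCode.OK, name);
}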
In one of my previous posts I showed an example of how to create Azure AD groups with owners which are added right after the group has been created: Create Azure AD group and set group owner using Microsoft Graph Client library. This approach works but on some tenants it may cause slowness and performance problems during group creation. You may get the following error when using this approach:
"code": "ResourceNotFound" "message": "Resource provisioning is in progress. Please try again."
This issue is also reported on github: After office 365 group is created, the group site provisioning is pending. Also if you try to create a group using PnP PowerShell or the OfficeDevPnP library you may face the same issue. PnP uses the UnifiedGroupsUtility.CreateUnifiedGroup method to create groups. Let’s check its code:
public static UnifiedGroupEntity CreateUnifiedGroup(string displayName, string description, string mailNickname,
string accessToken, string[] owners = null, string[] members = null, Stream groupLogo = null,
bool isPrivate = false, int retryCount = 10, int delay = 500)
{
UnifiedGroupEntity result = null;
if (String.IsNullOrEmpty(displayName))
{
throw new ArgumentNullException(nameof(displayName));
}
if (String.IsNullOrEmpty(description))
{
throw new ArgumentNullException(nameof(description));
}
if (String.IsNullOrEmpty(mailNickname))
{
throw new ArgumentNullException(nameof(mailNickname));
}
if (String.IsNullOrEmpty(accessToken))
{
throw new ArgumentNullException(nameof(accessToken));
}
try
{
// Use a synchronous model to invoke the asynchronous process
result = Task.Run(async () =>
{
var group = new UnifiedGroupEntity();
var graphClient = CreateGraphClient(accessToken, retryCount, delay);
// Prepare the group resource object
var newGroup = new Microsoft.Graph.Group
{
DisplayName = displayName,
Description = description,
MailNickname = mailNickname,
MailEnabled = true,
SecurityEnabled = false,
Visibility = isPrivate == true ? "Private" : "Public",
GroupTypes = new List<string> { "Unified" },
};
Microsoft.Graph.Group addedGroup = null;
String modernSiteUrl = null;
// Add the group to the collection of groups (if it does not exist)
if (addedGroup == null)
{
addedGroup = await graphClient.Groups.Request().AddAsync(newGroup);
if (addedGroup != null)
{
group.DisplayName = addedGroup.DisplayName;
group.Description = addedGroup.Description;
group.GroupId = addedGroup.Id;
group.Mail = addedGroup.Mail;
group.MailNickname = addedGroup.MailNickname;
int imageRetryCount = retryCount;
if (groupLogo != null)
{
using (var memGroupLogo = new MemoryStream())
{
groupLogo.CopyTo(memGroupLogo);
while (imageRetryCount > 0)
{
bool groupLogoUpdated = false;
memGroupLogo.Position = 0;
using (var tempGroupLogo = new MemoryStream())
{
memGroupLogo.CopyTo(tempGroupLogo);
tempGroupLogo.Position = 0;
try
{
groupLogoUpdated = UpdateUnifiedGroup(addedGroup.Id, accessToken, groupLogo: tempGroupLogo);
}
catch
{
// Skip any exception and simply retry
}
}
// In case of failure retry up to 10 times, with 500ms delay in between
if (!groupLogoUpdated)
{
// Pop up the delay for the group image
await Task.Delay(delay * (retryCount - imageRetryCount));
imageRetryCount--;
}
else
{
break;
}
}
}
}
int driveRetryCount = retryCount;
while (driveRetryCount > 0 && String.IsNullOrEmpty(modernSiteUrl))
{
try
{
modernSiteUrl = GetUnifiedGroupSiteUrl(addedGroup.Id, accessToken);
}
catch
{
// Skip any exception and simply retry
}
// In case of failure retry up to 10 times, with 500ms delay in between
if (String.IsNullOrEmpty(modernSiteUrl))
{
await Task.Delay(delay * (retryCount - driveRetryCount));
driveRetryCount--;
}
}
group.SiteUrl = modernSiteUrl;
}
}
#region Handle group's owners
if (owners != null && owners.Length > 0)
{
await UpdateOwners(owners, graphClient, addedGroup);
}
#endregion
#region Handle group's members
if (members != null && members.Length > 0)
{
await UpdateMembers(members, graphClient, addedGroup);
}
#endregion
return (group);
}).GetAwaiter().GetResult();
}
catch (ServiceException ex)
{
Log.Error(Constants.LOGGING_SOURCE, CoreResources.GraphExtensions_ErrorOccured, ex.Error.Message);
throw;
}
return (result);
}
As you can see it basically uses the same approach: at first it creates the group and then adds owners/members using the UpdateOwners/UpdateMembers methods.
The workaround for this problem is to not use the Graph API client library and instead use plain REST calls with the special OData bind syntax for owners and members, as described here: Create a Group in Microsoft Graph API with a Owner
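With plain REST the body of the POST request to https://graph.microsoft.com/v1.0/groups looks roughly like this (the ids are placeholders):
{
  "displayName": "Test group",
  "mailNickname": "testgroup",
  "mailEnabled": true,
  "securityEnabled": false,
  "groupTypes": [ "Unified" ],
  "owners@odata.bind": [ "https://graph.microsoft.com/v1.0/users/{id1}" ],
  "members@odata.bind": [ "https://graph.microsoft.com/v1.0/users/{id2}" ]
}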
This approach works, i.e. groups are created with owners and members from the beginning and you don’t have to call other methods to add them separately. But is it possible to do the same with the Graph API .Net client library (it would be good because it is more convenient to use the client library than raw REST calls)? The answer is yes, it is possible, and below it is shown how to do it.
It has to be said that if you only use standard Graph API .Net client library classes it is not possible. If you check the Group.Owners property you will see that it has the IGroupOwnersCollectionWithReferencesPage type:
In the Graph API library there is only one class which implements this interface – GroupOwnersCollectionWithReferencesPage – and you can’t create an instance of this class with owners specified and pass it to the Group.Owners property; it is meant to be used with Groups[].Request().Owners.References when you read group owners with pagination. So my first attempt was to create a custom class which implements the IGroupOwnersCollectionWithReferencesPage interface, accepts a list of users in the constructor, and pass its instance to the Group.Owners property before creation:
public class LightOwners : CollectionPage<DirectoryObject>, IGroupOwnersCollectionWithReferencesPage
{
public LightOwners()
{
}
public LightOwners(List<User> owners)
{
if (owners != null)
{
owners.ForEach(o => this.Add(o));
}
}
public void InitializeNextPageRequest(IBaseClient client, string nextPageLinkString)
{
}
public IGroupOwnersCollectionWithReferencesRequest NextPageRequest { get; }
}
This approach didn’t work: the group object was serialized to JSON when the client library made the POST request to https://graph.microsoft.com/v1.0/groups for creating the group, and in the “owners” property all users’ properties were fully serialized as well – while we need “owners@odata.bind” with “https://graph.microsoft.com/v1.0/users/{id1}” strings instead of fully serialized user objects.
After that I tried another approach which worked: at first I created a new class GroupExtended which inherits the Group class from the Graph API library:
public class GroupExtended : Group
{
[JsonProperty("owners@odata.bind", NullValueHandling = NullValueHandling.Ignore)]
public string[] OwnersODataBind { get; set; }
[JsonProperty("members@odata.bind", NullValueHandling = NullValueHandling.Ignore)]
public string[] MembersODataBind { get; set; }
}
As you can see it adds 2 new properties, OwnersODataBind and MembersODataBind, which are serialized to “owners@odata.bind” and “members@odata.bind” respectively. Then I modified the UnifiedGroupsUtility.CreateUnifiedGroup method to create groups with owners and members from the beginning using a single API call instead of adding them after the group was created:
public static UnifiedGroupEntity CreateUnifiedGroup(string displayName, string description, string mailNickname,
string accessToken, string[] owners = null, string[] members = null, Stream groupLogo = null,
bool isPrivate = false, int retryCount = 10, int delay = 500)
{
UnifiedGroupEntity result = null;
if (String.IsNullOrEmpty(displayName))
{
throw new ArgumentNullException(nameof(displayName));
}
if (String.IsNullOrEmpty(description))
{
throw new ArgumentNullException(nameof(description));
}
if (String.IsNullOrEmpty(mailNickname))
{
throw new ArgumentNullException(nameof(mailNickname));
}
if (String.IsNullOrEmpty(accessToken))
{
throw new ArgumentNullException(nameof(accessToken));
}
try
{
// Use a synchronous model to invoke the asynchronous process
result = Task.Run(async () =>
{
var group = new UnifiedGroupEntity();
var graphClient = CreateGraphClient(accessToken, retryCount, delay);
// Prepare the group resource object
var newGroup = new GroupExtended
{
DisplayName = displayName,
Description = description,
MailNickname = mailNickname,
MailEnabled = true,
SecurityEnabled = false,
Visibility = isPrivate == true ? "Private" : "Public",
GroupTypes = new List<string> { "Unified" }
};
if (owners != null && owners.Length > 0)
{
var users = GetUsers(graphClient, owners);
if (users != null)
{
newGroup.OwnersODataBind = users.Select(u => string.Format("https://graph.microsoft.com/v1.0/users/{0}", u.Id)).ToArray();
}
}
if (members != null && members.Length > 0)
{
var users = GetUsers(graphClient, members);
if (users != null)
{
newGroup.MembersODataBind = users.Select(u => string.Format("https://graph.microsoft.com/v1.0/users/{0}", u.Id)).ToArray();
}
}
Microsoft.Graph.Group addedGroup = null;
String modernSiteUrl = null;
// Add the group to the collection of groups (if it does not exist)
if (addedGroup == null)
{
addedGroup = await graphClient.Groups.Request().AddAsync(newGroup);
if (addedGroup != null)
{
group.DisplayName = addedGroup.DisplayName;
group.Description = addedGroup.Description;
group.GroupId = addedGroup.Id;
group.Mail = addedGroup.Mail;
group.MailNickname = addedGroup.MailNickname;
int imageRetryCount = retryCount;
if (groupLogo != null)
{
using (var memGroupLogo = new MemoryStream())
{
groupLogo.CopyTo(memGroupLogo);
while (imageRetryCount > 0)
{
bool groupLogoUpdated = false;
memGroupLogo.Position = 0;
using (var tempGroupLogo = new MemoryStream())
{
memGroupLogo.CopyTo(tempGroupLogo);
tempGroupLogo.Position = 0;
try
{
groupLogoUpdated = UnifiedGroupsUtility.UpdateUnifiedGroup(addedGroup.Id, accessToken, groupLogo: tempGroupLogo);
}
catch
{
// Skip any exception and simply retry
}
}
// In case of failure retry up to 10 times, with 500ms delay in between
if (!groupLogoUpdated)
{
// Pop up the delay for the group image
await Task.Delay(delay * (retryCount - imageRetryCount));
imageRetryCount--;
}
else
{
break;
}
}
}
}
int driveRetryCount = retryCount;
while (driveRetryCount > 0 && String.IsNullOrEmpty(modernSiteUrl))
{
try
{
modernSiteUrl = UnifiedGroupsUtility.GetUnifiedGroupSiteUrl(addedGroup.Id, accessToken);
}
catch
{
// Skip any exception and simply retry
}
// In case of failure retry up to 10 times, with 500ms delay in between
if (String.IsNullOrEmpty(modernSiteUrl))
{
await Task.Delay(delay * (retryCount - driveRetryCount));
driveRetryCount--;
}
}
group.SiteUrl = modernSiteUrl;
}
}
// #region Handle group's owners
//
// if (owners != null && owners.Length > 0)
// {
// await UpdateOwners(owners, graphClient, addedGroup);
// }
//
// #endregion
// #region Handle group's members
//
// if (members != null && members.Length > 0)
// {
// await UpdateMembers(members, graphClient, addedGroup);
// }
//
// #endregion
return (group);
}).GetAwaiter().GetResult();
}
catch (ServiceException ex)
{
//Log.Error(Constants.LOGGING_SOURCE, CoreResources.GraphExtensions_ErrorOccured, ex.Error.Message);
throw;
}
return (result);
}
private static List<User> GetUsers(GraphServiceClient graphClient, string[] owners)
{
if (owners == null)
{
return new List<User>();
}
var result = Task.Run(async () =>
{
var usersResult = new List<User>();
var users = await graphClient.Users.Request().GetAsync();
while (users.Count > 0)
{
foreach (var u in users)
{
if (owners.Any(o => u.UserPrincipalName.ToLower().Contains(o.ToLower())))
{
usersResult.Add(u);
}
}
if (users.NextPageRequest != null)
{
users = await users.NextPageRequest.GetAsync();
}
else
{
break;
}
}
return usersResult;
}).GetAwaiter().GetResult();
return result;
}
private static GraphServiceClient CreateGraphClient(String accessToken, int retryCount = 10, int delay = 500)
{
// Creates a new GraphServiceClient instance using a custom PnPHttpProvider
// which natively supports retry logic for throttled requests
// Default are 10 retries with a base delay of 500ms
var result = new GraphServiceClient(new DelegateAuthenticationProvider(
async (requestMessage) =>
{
if (!String.IsNullOrEmpty(accessToken))
{
// Configure the HTTP bearer Authorization Header
requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", accessToken);
}
}), new PnPHttpProvider(retryCount, delay));
return (result);
}
And after that groups were created successfully with owners and members. At first the code resolves the specified users by their emails and then fills the OwnersODataBind and MembersODataBind properties with strings like "https://graph.microsoft.com/v1.0/users/{id1}" (we need to resolve the users from Azure AD first in order to get their ids to build these strings). After that it creates the group with a single call which already contains the specified owners and members. So this approach allows creating groups with owners and members set from the beginning.
Function – a function-specific API key is required. This is the default value if none is provided.
Anonymous - no API key is required
Admin - the master key is required
More info about these keys can be found here. But how does this choice affect the Visual Studio project? There are a number of files created after you click Ok in the above dialog window:
sln – solution file
csproj – project file
host.json – host settings file
local.settings.json – local app settings file
Function1.cs – code of Azure function
The difference is only in the cs file with the function code (Function1.cs): different AuthorizationLevel values are passed to the HttpTrigger attribute depending on which access rights are chosen:
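For example, for the Function level the generated code looks roughly like this (a v1-style template; for Anonymous or Admin only the AuthorizationLevel value differs):
[FunctionName("Function1")]
public static HttpResponseMessage Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequestMessage req,
    TraceWriter log)
{
    // only the first argument of HttpTrigger changes between the three options
    log.Info("C# HTTP trigger function processed a request.");
    return req.CreateResponse(HttpStatusCode.OK);
}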
The other files are identical. Hope this info helps to understand the Azure functions project structure better.
As you probably know Sharepoint allows exporting list content into RSS format. There may be one issue however: some items may not be shown in the default Firefox RSS viewer although these items exist in the page source (i.e. they are returned from the server). If you check the browser console you may find the following error there:
Items which are not rendered correctly have an enclosure tag while items which are rendered correctly don’t have it.
In order to fix this issue you may use the following workaround for Sharepoint on-premise: create an ashx handler and put it into the /Layouts subfolder; the handler sends an internal http request to the OTB /_layouts/15/listfeed.aspx?List={listId} url, then removes the enclosure tag via Regex and writes the final result to the response (i.e. it implements a kind of proxy for the OTB RSS feed):
string result = string.Empty;
string url = string.Format("{0}?List={1}", SPUrlUtility.CombineUrl(web.Url, "_layouts/15/listfeed.aspx"), list.ID);
var request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = CredentialCache.DefaultNetworkCredentials;
var response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
using (var stream = response.GetResponseStream())
{
using (var reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
}
}
result = Regex.Replace(result, @"<enclosure.+?/>", string.Empty);
}
As a result there won’t be an enclosure tag and the RSS feed will be rendered correctly in Firefox.
Some time ago we faced the following problem: let’s say we have a PowerShell script which does some actions on a modern Sharepoint Online site collection. Among other actions it sets property bag values on the root site of the target site collection:
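In the script it is roughly the following pair of calls (the site url, key and value are placeholders; the exact cmdlets in the original script may differ):
Set-SPOSite -Identity https://{tenant}.sharepoint.com/sites/test -DenyAddAndCustomizePages $false
Set-PnPPropertyBagValue -Key "MyKey" -Value "MyValue"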
It returns control quite quickly. The problem however is that in practice the effect of setting DenyAddAndCustomizePages to false is delayed. I.e. if you try to set a property bag value using the cmdlet shown above immediately after you set DenyAddAndCustomizePages to false – there won’t be any errors, but the value won’t really be saved into the property bag.
As a workaround I’ve added a 60 second delay to the script with UI indication (a dot is printed to the output each second until there are 60 dots in the line):
Write-Host "Wait some time before changes will take effect..."
$i = 0
do
{
Write-Host "." -NoNewLine
Start-Sleep -s 1
$i++
}
while ($i -lt 60)
Write-Host
After this delay property bag values were saved successfully.
If you try to delete alerts from the Sharepoint site programmatically:
SPWeb web = ...;
web.Alerts.Delete(alertId);
You may face an UnauthorizedAccessException:
<nativehr>0x80070005</nativehr><nativestack></nativestack> at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.DeleteSubscription(String bstrUrl, String bstrListName, String bstrSubId, Boolean bListItem, UInt32 ulItemId, Boolean bSiteAdmin, Int32 lUserId) at Microsoft.SharePoint.SPAlertCollection.Delete(Guid idAlert)
The first thing to check is of course that the account under which the above code is executed has all necessary permissions on the site. If this is the case but the problem is still there, check that your site collection is not in read-only mode. You may do it using the following PowerShell command:
Get-SPSite -Id http://example.com | select ReadOnly,Readlocked,WriteLocked,LockIssue | ft -autosize
If site is in readonly mode result will look like this:
and if site is not readonly it will look like this:
You may unlock the site collection using the following PowerShell command:
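For on-premises Sharepoint it is roughly the following (the url is a placeholder):
Set-SPSite -Identity http://example.com -LockState Unlock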
Modern Team and Communication sites can be created through the PnP PowerShell cmdlet New-PnPSite. Here are examples of how you may create these sites:
Team: New-PnPSite -Type TeamSite -Title Test -Alias test
Communication: New-PnPSite -Type CommunicationSite -Title Test -Url https://{tenant}.sharepoint.com/sites/test
Note that for a Team site we use the Alias parameter while for a Communication site we use Url. It is mandatory to use a relative url when you create a Team site and an absolute url when you create a Communication site. If you try to create a Team site with an absolute url:
Team: New-PnPSite -Type TeamSite -Title Test -Alias https://{tenant}.sharepoint.com/sites/test
you will get the following error:
Invalid value specified for property 'mailNickname' of resource 'Group'.
And if you will try to create communication site with relative url:
Communication: New-PnPSite -Type CommunicationSite -Title Test -Url test
there will be another error:
This operation is not supported for a relative URI.
(BTW with the modern experience it is currently possible to create sub folders of only the first level. If you want to create sub folders of deeper levels you have to switch to the classic experience and create the sub folder from there.)
However if you try to delete this folder you will get the following error:
Sorry, something went wrong
The server has encountered the following error(s):
Test
Access denied. You do not have permission to perform this action or access this resource.
In order to avoid this error you need to enable customization of pages and scripts on the site. You may do it with the following PowerShell command:
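For Sharepoint Online it is roughly the following (the site url is a placeholder):
Set-SPOSite -Identity https://{tenant}.sharepoint.com/sites/test -DenyAddAndCustomizePages $false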
There are a number of differences between apps registered in these 2 portals – you may check them e.g. here: About v2.0. For this article let’s note that apps registered in v2 may support both the web app and native platforms while apps in v1 may be either web app or native but not both. If you need both you have to register 2 apps in the v1 portal.
Recently we faced a problem with getting a user token for MS Graph, i.e. a token based on user credentials. We used the following code for that and it works properly for an app registered in the v2 portal with native platform support:
var credentials = new Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential("username", "password");
var token = Task.Run(async () =>
{
var authContext = new AuthenticationContext(string.Format("https://login.microsoftonline.com/{0}", "mytenant.onmicrosoft.com"));
var authResult = await authContext.AcquireTokenAsync("https://graph.microsoft.com", appId, credentials);
return authResult.AccessToken;
}).GetAwaiter().GetResult();
where for appId we used the id of the Azure AD app registered in v2. When we tried to run the same code for an app registered in the v1 portal with the web app type the following error was shown:
Error: index was outside the bounds of the array
The same code also works properly for an app from the v1 portal but with the native type. I.e. it looks like the AuthenticationContext.AcquireTokenAsync() method may fetch a user token only for a native app. If you know how to get a user token for a web app from the v1 portal please share it in the comments.
Some time ago I faced an interesting problem when I tried to get properties of an Azure AD group using an app token with app permissions (without available user context) through Graph API:
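The request (which failed with an error) is not shown above; it was roughly the following call made from Postman with an app-only bearer token:
GET https://graph.microsoft.com/v1.0/groups/{id}?$select=visibility,unseenCount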
Investigation showed that the problem was caused by the unseenCount property. When I tried to remove it, the other selected property (visibility) was returned successfully:
What was even stranger is that it worked in Graph explorer:
Communication with MS support both on the forums (see Can't get group's unseenCount) and via an Azure support ticket helped to figure out the reason for this problem: in Postman I used an app token with app permissions, while in Graph explorer I was authenticated with my own account (see above), i.e. in Graph explorer delegated permissions were used. And there is a known issue in MS Graph (see Known issues with Microsoft Graph): unseenCount may be retrieved only using delegated permissions:
“Examples of group features that support only delegated permissions:
Group conversations, events, photo
External senders, accepted or rejected senders, group subscription
However currently if you try to run it you will get the following 400 Bad request error:
{"error":{"code":"-1, Microsoft.OData.Core.ODataException","message":"The property '__metadata' does not exist on type 'SP.Social.SocialActorInfo'. Make sure to only use property names that are defined by the type."}}
The problem is with the __metadata property of the actor which seems to be outdated nowadays. In order to avoid this error just remove or comment out __metadata from the actor object.
Another problem is related to odata=verbose specified in the Accept header. If you try to follow a site with it the API will return the following 406 Not acceptable error:
{"error":{"code":"-1, Microsoft.SharePoint.Client.ClientServiceException","message":"The HTTP header ACCEPT is missing or its value is invalid."}}
In order to resolve it change odata=verbose to odata.metadata=minimal in the Accept header. Here is the working code written with Typescript and SPFx:
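The original listing is not reproduced here; a minimal sketch of it, assuming an SPFx web part context (the function and variable names are mine), is roughly:
import { WebPartContext } from '@microsoft/sp-webpart-base';
import { SPHttpClient, SPHttpClientResponse } from '@microsoft/sp-http';

// Follow a site for the current user: no __metadata in the actor object
// and Accept uses odata.metadata=minimal instead of odata=verbose.
export function followSite(context: WebPartContext, siteUrl: string): Promise<SPHttpClientResponse> {
  const body = {
    actor: {
      ActorType: 2,      // SP.Social.SocialActorType: 2 = site
      ContentUri: siteUrl,
      Id: null
    }
  };
  return context.spHttpClient.post(
    `${context.pageContext.web.absoluteUrl}/_api/social.following/follow`,
    SPHttpClient.configurations.v1,
    {
      headers: {
        'Accept': 'application/json;odata.metadata=minimal',
        'Content-Type': 'application/json;charset=utf-8'
      },
      body: JSON.stringify(body)
    });
}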
If you use structural navigation on your publishing Sharepoint site and navigation nodes are created as headings (NodeType = Heading), then it is quite straightforward to delete such navigation nodes programmatically. Here is how it can be done:
var web = ...
var pweb = PublishingWeb.GetPublishingWeb(web);
var globalNavigation = pweb.Navigation.GlobalNavigationNodes;
var nodePage = globalNavigation.Cast<SPNavigationNode>().FirstOrDefault(n => (n.Title == "Test"));
if (nodePage != null)
{
nodePage.Delete();
}
In this example we delete the navigation node with the title “Test”. However if you try to delete navigation nodes which were created as AuthoredLink* (see NodeTypes Enum):
AuthoredLink
AuthoredLinkPlain
AuthoredLinkToPage
AuthoredLinkToWeb
using the same code, you will find that the link doesn’t get deleted. The workaround is to first change the NodeType property to Heading and then delete the node:
var web = ...
var pweb = PublishingWeb.GetPublishingWeb(web);
var globalNavigation = pweb.Navigation.GlobalNavigationNodes;
var nodePage = globalNavigation.Cast<SPNavigationNode>().FirstOrDefault(n => (n.Title == "Test" &&
(n.Properties != null && n.Properties["NodeType"] != null && n.Properties["NodeType"] is string &&
(n.Properties["NodeType"] as string == "AuthoredLink" || (n.Properties["NodeType"] as string).StartsWith("AuthoredLink"))));
if (nodePage != null)
{
nodePage.Properties["NodeType"] = "Heading";
nodePage.Update();
// reinitialize navigation nodes
pweb = PublishingWeb.GetPublishingWeb(web);
globalNavigation = pweb.Navigation.GlobalNavigationNodes;
nodePage = globalNavigation.Cast<SPNavigationNode>().FirstOrDefault(n => n.Title == "Test");
if (nodePage != null)
{
nodePage.Delete();
}
}
After that AuthoredLink navigation node will be successfully deleted.
In one of my previous posts I wrote how to create modern Sharepoint sites: How to create modern Team or Communication site in Sharepoint (quite a basic post but necessary if you have just started to work with modern sites). In this article we will continue exploring modern sites and will see how to get the list of modern Team or Communication sites using the Search API. Using the Search API is preferable in many scenarios as you get all sites at once with a single API call.
Let’s create a modern Team site and explore its Site Pages doclib. In the default list view let’s add the additional column “Content Type” to see what content type is used for the default front page:
As you can see it uses the “Site Page” content type. A modern Communication site also uses the same content type for the front page.
So in order to get the list of all modern sites we may query for pages created with the “Site Page” content type. In order to make our search query language-independent we will use the content type id instead of the name. In order to get it, go to the Site Pages doclib settings and click the Site Page content type. Then copy the content type id from the query string. You will have something like this:
0x0101009D1CB255DA76424F860D91F20E6C4118…
where the rest will be unique for your doclib. Our search query will then look like this:
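Based on the description below it is roughly a ContentTypeId prefix query (the prefix is the id copied above):
ContentTypeId:0x0101009D1CB255DA76424F860D91F20E6C4118*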
which means: return all pages whose content type id starts with the specified id. If we test it in the Search Query Tool we will get the list of all modern Team and Communication sites in the tenant:
In order to get a distinct list don’t forget to check the “Trim duplicates” option and select at least the Title and SPWebUrl managed properties which contain the site title and url.
Sometimes you need to get the list of operations available on a Sharepoint REST API endpoint. Let’s say we want to check the operations available under the /_api/SitePages endpoint.
First of all we need to get authentication cookies. In order to get them launch Fiddler and open e.g. the Sharepoint landing page (/_layouts/15/Sharepoint.aspx) which is opened from App launcher > Sharepoint. On this page there will be several REST API calls which contain the Cookie header, e.g. /_api/GroupSiteManager/CanUserCreateGroup:
From this Fiddler view copy the value of the Cookie header.
After that launch Postman and create a request for the endpoint in question: /_api/SitePages. In the Headers section add Cookie, put the value copied from Fiddler and click Send:
The response will contain the list of relative endpoint operations available under the selected endpoint. In this example they are:
/_api/SitePages/CommunicationSite
/_api/SitePages/Pages
/_api/SitePages/PublishingSite
Note that this method doesn’t return POST/PUT endpoints unfortunately.
Modern Team/Communication sites are not displayed on the classic create site page in Sharepoint together with the other “classic” web templates. In order to create them you need to first click the App launcher icon in the top left corner and choose the Sharepoint link there:
It will open the Sharepoint landing page which has a Create site icon at the top:
After clicking on this icon you will be able to choose which modern site to create: Team or Communication:
And in the last step you have to specify the site name, privacy and classification (the latter is shown only if classifications are configured for your tenant):
After clicking Next your modern Team or Communication site will be created.
If you work with SPFx and e.g. implement a web part which uses react/redux for its components then you may face the need to combine multiple async redux actions into a single one. Let’s assume that we have a component which shows O365 groups and has the appropriate properties and actions for it:
import * as React from 'react';
import { connect } from 'react-redux';
import { IGroup } from 'IGroup';
import * as GroupActions from 'groupActions';
export interface GroupListProps {
groups: IGroup[];
actions: {
getGroups: GroupActions.IGetGroups,
};
}
class GroupList extends React.Component<GroupListProps, {}> {
public componentWillMount(): void {
if (!this.props.groups) {
this.props.actions.getGroups();
}
}
public render() {
return (
this.props.groups ? <GroupsList items={this.props.groups} /> : <LoadingSpinner />
);
}
}
const mapStateToProps = (state: GroupListState) => ({
groups: state.groups,
});
const mapDispatchToProps = (dispatch: Dispatch<any>) => ({
actions: {
getGroups: () => dispatch(GroupActions.getGroupsImpl()),
}
});
/**
* Connecting the store to the component
*/
export default connect(
mapStateToProps,
mapDispatchToProps
)(GroupList);
After that we decide to show regular Sharepoint sites together with the groups. We need an action for getting the sites themselves (asynchronously using promises) and another action for combining the groups and sites into a single list. Both actions are asynchronous. Let’s see how we may combine them into a single one.
At first let’s add a new action for sites:
import * as React from 'react';
import { connect } from 'react-redux';
import { IGroup } from 'IGroup';
import { ISite } from 'ISite';
import * as GroupActions from 'groupActions';
import * as SiteActions from 'siteActions';
export interface GroupListProps {
groups: IGroup[];
sites: ISite[];
actions: {
getGroups: GroupActions.IGetGroups,
getSites: SiteActions.IGetSites,
};
}
class GroupListContainer extends React.Component<GroupListProps, {}> {
public componentWillMount(): void {
if (!this.props.groups) {
this.props.actions.getGroups();
}
if (!this.props.sites) {
this.props.actions.getSites();
}
}
public render() {
return (
this.props.groups ? <GroupsList items={this.props.groups} /> : <LoadingSpinner />
);
}
}
const mapStateToProps = (state: GroupListState) => ({
groups: state.groups,
sites: state.sites
});
const mapDispatchToProps = (dispatch: Dispatch<any>) => ({
actions: {
getGroups: () => dispatch(GroupActions.getGroupsImpl()),
getSites: () => dispatch(SiteActions.getSitesImpl()),
}
});
/**
* Connecting the store to the component
*/
export default connect(
mapStateToProps,
mapDispatchToProps
)(GroupListContainer);
Then create a new combined action which calls the 2 other actions (getGroups and getSites) via dispatch, and an additional action which combines the groups and sites into a single list:
import * as React from 'react';
import { connect } from 'react-redux';
import { IGroup } from 'IGroup';
import { ISite } from 'ISite';
import { IGroupOrSite } from 'IGroupOrSite';
import * as GroupActions from 'groupActions';
import * as SiteActions from 'siteActions';
import * as GroupOrSiteActions from 'groupOrSiteActions';
import { isEqual } from 'lodash'; // or '@microsoft/sp-lodash-subset' in SPFx
export interface GroupListProps {
groups: IGroup[];
sites: ISite[];
groupsOrSites: IGroupOrSite[];
actions: {
getGroupsAndSites: GroupOrSiteActions.IGetGroupsAndSites,
combineGroupsAndSites: GroupOrSiteActions.ICombineGroupsAndSites
};
}
class GroupListContainer extends React.Component<GroupListProps, {}> {
public componentWillMount(): void {
if (!this.props.groupsOrSites) {
this.props.actions.getGroupsAndSites();
}
}
public componentDidUpdate(prevProps: GroupListProps) {
// Combine groups and sites
if ((!isEqual(this.props.groups, prevProps.groups) || !isEqual(this.props.sites, prevProps.sites)) &&
this.props.groups && this.props.sites) {
this.props.actions.combineGroupsAndSites(this.props.groups, this.props.sites);
}
}
public render() {
return (
this.props.groupsOrSites ? <GroupsList items={this.props.groupsOrSites} /> : <LoadingSpinner />
);
}
}
const mapStateToProps = (state: GroupListState) => ({
groups: state.groups,
sites: state.sites,
groupsOrSites: state.groupsOrSites
});
const mapDispatchToProps = (dispatch: Dispatch<any>) => ({
actions: {
getGroupsAndSites: () => {
dispatch(GroupActions.getGroupsImpl());
return dispatch(SiteActions.getSitesImpl());
},
combineGroupsAndSites: (groups: IGroup[], sites: ISite[]) => dispatch(GroupOrSiteActions.combineSitesImpl(groups, sites))
}
});
/**
* Connecting the store to the component
*/
export default connect(
mapStateToProps,
mapDispatchToProps
)(GroupListContainer);
As a result, when you call this combined action it will perform 2 async sub-actions.
Some time ago I wrote about the very convenient Chrome extension SP Editor (see this post: SP Editor Chrome extension: free open source alternative to Sharepoint Designer). It allows performing many operations on your Sharepoint Online or on-prem site right in the browser without running any scripts or installing additional tools. However, like any Chrome extension, it is disabled by default in private (incognito) Chrome mode. This is not very convenient because when you work with Sharepoint Online you often need to login to different sites with different accounts, and incognito mode is often used for that.
The good thing is that it is quite easy to enable SP Editor for incognito mode: go to Chrome menu > More tools > Extensions > SP Editor > Details. In the opened window click “Allow in incognito”:
After that the extension will become available in private mode (you will need to re-open the F12 developer tools to see the SharePoint tab from SP Editor there).
Suppose that we have a web site which shows information from multiple RSS feeds. In this example I will use a Sharepoint site which consumes RSS feeds using the standard RSS-viewer web part, but it can also be a site on any other technology:
In this case if the external provider decides to change the RSS url – the connection to your site will also be broken. And if you showed the feed in many places you will need to go through all of them and fix them one by one:
In the case of Sharepoint this may be a real problem because you will need to go through all sub sites, locate all RSS-viewer web parts which consume the broken RSS feed, edit the pages and fix the RSS feed url in the web part properties.
It would be better if our site were decoupled from the external RSS urls via some intermediate reverse proxy: in this case on our site we would use proxy urls instead of the real urls, and in the proxy we would just define the mapping between the proxy url and the real RSS url:
In this case if the url of one of the RSS feeds changes we will only need to change the appropriate mapping between the proxy url and the RSS url on the proxy level – much simpler than going through the site and fixing all places where the RSS feed is used.
Such an RSS reverse proxy can be configured using IIS: we will need the Application Request Routing (ARR) and URL Rewrite IIS modules installed. After installing ARR go to the proxy settings (IIS manager > Server > Application Request Routing Cache > Server Proxy Settings) and enable the proxy there:
After that create a new site in IIS (in Sharepoint, instead of a new site you may also create a sub folder under the /_layouts virtual folder and use it as the proxy url; in this case you will need to define the url rewrite rules on this sub folder level, i.e. not on the root site level – see below), define its url via binding (suppose that it will be http://myproxy.com) or define a port, and add the following rewrite rule:
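In web.config form the rule would look roughly like this (the urls are the example ones discussed below):
<system.webServer>
  <rewrite>
    <rules>
      <rule name="RSS proxy example" stopProcessing="true">
        <match url="^example$" />
        <action type="Rewrite" url="https://example.com/feed" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>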
Here we tell IIS that all requests which come to http://myproxy.com/example should be rewritten to the real RSS feed url https://example.com/feed (this is not a really working RSS feed – it is used just as an example). Now if the external RSS feed’s url https://example.com/feed changes – you will only need to go there and change it in the URL rewrite rule. Your site will still use http://myproxy.com/example in all places and won’t require changes.
As you probably know MS changed the MVP award renewal dates and now renewal emails come on the 1st of July. I got the following exciting email yesterday: Congratulations 2018-2019 Microsoft MVPs. This is my 8th award and I’m very happy to be part of the great community. In my regular work I try to participate in the Sharepoint and Office 365 communities and help people solve technical problems. Big thanks to MS for recognizing these efforts. In the last few years the focus has moved from on-prem Sharepoint to Sharepoint Online, Office 365 and Azure. It is very interesting to observe how these new platforms and services grow and evolve interacting with each other. And it is even more interesting to take part in this process by contributing to community life and communicating with MS product teams. Thanks to all readers of my blog and see you with new challenges and solutions :)
If you develop Sharepoint Framework web parts (SPFx web parts) you are probably familiar with one part of the development process – the local web server which is launched with gulp:
gulp serve --nobrowser
It launches a local web server on localhost:port which allows you to test the SPFx web part in your own dev environment (not the release version) which consumes js and css files directly from this localhost:port address. So you may modify them and reload the page without redeploying the app package to the App catalog (which is time consuming if you develop js/css).
However sometimes the local web server hangs without any visible reason. In order to avoid it you may stop it (Ctrl-C) and run it again – but that also takes time and doesn’t always help. If you face this problem, go to the cmd window with the running gulp serve and try pressing Esc several times. If after that it starts to show output like this: