Tuesday, September 22, 2020

How to enable iOS universal links in web-facing Sharepoint on-prem site

Universal links allow a website to open its companion mobile app when a user taps a link to the site. I.e. if the user has the appropriate mobile app installed, it will be opened instead of a regular browser page, which gives a better user experience on mobile devices. In this post I will show how to configure universal links for iOS on a web-facing Sharepoint on-premise site. I won't cover steps like building the mobile app itself, getting its app id, etc. – please refer to appropriate resources related to mobile app development for that.

In order to enable universal links for iOS on your web-facing Sharepoint on-prem site the following steps have to be done:

  1. create an AASA (Apple app site association) file. The file name should be exactly "apple-app-site-association", without extension. It should contain data in json format with a predefined structure (see below)
  2. put it in the root folder of the web-facing site
  3. ensure that the AASA file is available to anonymous users (robots) with Content-Type = "application/json"

This is how an AASA file may look:

{
  "applinks": {
    "apps": [],
    "details": [ { "appID": "{appId}", "paths": ["*"] } ]
  }
}

(use your app id instead of the {appId} placeholder). When the file is ready we need to copy it to the IIS folder of our Sharepoint site. The next step is to provide anonymous access and set the correct content type. It can be done by modifying web.config of the Sharepoint site – we need to add a new location element under the configuration tag (usually web.config of a Sharepoint site already contains many other location elements – so just put it next to them):

 <location path="apple-app-site-association">
    <system.web>
      <authorization>
        <allow users="*" />
      </authorization>
    </system.web>
    <system.webServer>
      <staticContent>
        <mimeMap fileExtension="." mimeType="application/json" />
      </staticContent>
    </system.webServer>
  </location>

It will make the AASA file available to anonymous users and will set the content type to "application/json".

The last step is to verify that everything is configured properly, e.g. with an online AASA validator.
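
You can also sanity-check the endpoint yourself before running a validator. Below is a minimal sketch of mine (not from the original post), assuming Node 18+ with its built-in fetch and a hypothetical https://example.com site url:

async function checkAasa(siteUrl: string): Promise<void> {
  // the file must be served anonymously from the site root
  const res = await fetch(`${siteUrl}/apple-app-site-association`);
  console.log(`Status: ${res.status}`); // expected: 200 even without authentication
  console.log(`Content-Type: ${res.headers.get("content-type")}`); // expected: application/json
  console.log(await res.text()); // expected: the AASA json shown above
}

checkAasa("https://example.com");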

At the end you should have universal links enabled for your web-facing Sharepoint on-prem site.


Monday, September 21, 2020

Call AAD secured Azure functions from Postman

It is often needed to test AAD secured Azure functions from client applications like Postman. In one of my previous articles I showed how to call secured Azure functions from C# and PowerShell (see Call Azure AD secured Azure functions from C#). Let's now see how to call them from the Postman app – in this case we won't need to write any code for testing our functions.

First of all we need to obtain an access token. If app permissions are used, we may use the same approach as in the article mentioned above. For delegated user permissions you may go to an SPFx page which calls the Azure function, open the browser Network tab, find a call to some Azure function there and copy the access token from the Authorization header (the value after the "Bearer" part).

After that go to Postman and create a new request using the url of the Azure function (it can be copied from Azure portal > Function app) and the correct http verb. On the Authorization tab set Type = "Bearer Token" and paste the access token into the Token field.

After that, on the Headers tab set the correct content type if needed (e.g. application/json).

Then press Send – we should get results from the AAD secured Azure function in Postman.

Wednesday, September 9, 2020

How to identify whether SPFx web part is running in web browser or in Teams client

As you probably know, it is possible to add a Sharepoint Online page as a tab to a Teams channel so it will be shown inside Teams: both when you access Teams from a web browser via https://teams.microsoft.com and from the native client (desktop or mobile). It may be needed to identify where exactly Teams is accessed from, in order to provide a better user experience for that particular client (e.g. add extra css, use different caching mechanisms, etc). For that we may inspect the User-Agent header (navigator.userAgent in Typescript) in different clients. In the following table I summarized the values of the User-Agent header for the mentioned scenarios:

Desktop browser (Chrome):
Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36

Mobile browser (Chrome on Android):
Mozilla/5.0 (Linux; Android 10; …) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.81 Mobile Safari/537.36

Teams web client in desktop browser (Chrome):
Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36 SPTeamsWeb

Teams web client in mobile browser (Safari on iPad; Chrome on Android and Safari on iPhone are not supported browsers for the Teams web client):
Mozilla/5.0 (Macintosh; Intel Mac OS X …) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15 SPTeamsWeb

Teams native desktop client:
Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Teams/1.3.00.21759 Chrome/69.0.3497.128 Electron/4.2.12 Safari/537.36

Teams native mobile client (Android):
Mozilla/5.0 (Linux; Android 10; …) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/85.0.4183.81 Mobile Safari/537.36 TeamsMobile-Android

Teams native mobile client (iPhone):
Mozilla/5.0 (iPhone; CPU iPhone OS … like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 TeamsMobile-iOS

Teams native mobile client (iPad):
Mozilla/5.0 (iPad; CPU OS … like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148 TeamsMobile-iOS

The parts which allow identifying the current client type are the trailing markers: TeamsMobile-Android or TeamsMobile-iOS in the User-Agent means the SPFx web part is running in the Teams native mobile client; SPTeamsWeb means it is running in the Teams web client; a Teams/{version} token together with Electron indicates the native desktop client. The sketch below turns this table into a reusable check.
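
Based on the table above, client detection in an SPFx web part can look like the following sketch (the enum and function names are mine, not part of any SDK):

export enum TeamsClientType { None, WebClient, DesktopClient, MobileClient }

export function detectTeamsClient(): TeamsClientType {
  const ua = navigator.userAgent;
  // native mobile clients append TeamsMobile-Android / TeamsMobile-iOS
  if (ua.indexOf("TeamsMobile-Android") >= 0 || ua.indexOf("TeamsMobile-iOS") >= 0) {
    return TeamsClientType.MobileClient;
  }
  // native desktop client runs on Electron and contains a Teams/{version} token
  if (ua.indexOf("Electron") >= 0 && ua.indexOf("Teams/") >= 0) {
    return TeamsClientType.DesktopClient;
  }
  // web client (teams.microsoft.com in a browser) appends SPTeamsWeb
  if (ua.indexOf("SPTeamsWeb") >= 0) {
    return TeamsClientType.WebClient;
  }
  return TeamsClientType.None;
}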

Wednesday, September 2, 2020

Redirect traffic from single site collection in Sharepoint on-prem via URL rewrite IIS module

Suppose that you have a Sharepoint on-prem web application with multiple site collections, each with its own url, and need to perform maintenance work on one of these site collections while the others keep working. In order to do that you may temporarily redirect traffic from this site collection to an external page (e.g. to a maintenance page hosted outside of this site collection). Technically it can be done by adding a URL rewrite rule to the Sharepoint web app in IIS.

This rule will redirect traffic from the site collection with url /test to the external site http://example.com. This is the same rule in xml form, as added to web.config of the Sharepoint web app:

<rewrite>
  <rules>
	<rule name="MyRule" enabled="true" stopProcessing="true">
	  <match url="test/*" />
	  <action type="Redirect" url="http://example.com" appendQueryString="false" redirectType="Found" />
	</rule>
  </rules>
</rewrite>

Note that if you perform temporary maintenance work on a site collection, the redirect type should be set to Found (302) as shown above, not to Permanent (301). In case of a permanent redirect the browser will cache the response and will keep redirecting users from this site collection even after the maintenance is completed and the rule is removed from the site.

Friday, August 14, 2020

Camlex 5.2 and Camlex.Client 4.1: support for Membership and OffsetDays

Good news for Sharepoint developers who use the Camlex library: new versions are released both for server and client object models. For the client object model separate packages are available for Sharepoint Online and on-premise (https://www.nuget.org/account/Packages):

Package | Version | Description
Camlex.NET.dll | 5.2.0 | Server object model (on-prem)
Camlex.Client.dll | 4.1.0 | Client object model (SP online)
Camlex.Client.2013 | 4.1.0 | Client object model (SP 2013 on-prem)
Camlex.Client.2016 | 4.1.0 | Client object model (SP 2016 on-prem)
Camlex.Client.2019 | 4.1.0 | Client object model (SP 2019 on-prem)

In this release 2 new features were added: support for the Membership element (many developers requested it, so it is finally out) and OffsetDays. Main credits for this release go to Ivan Russo who did the actual work with very good quality, so I only needed to validate the PR, accept it and add a few more unit tests. Then I merged the new features to the client branch which keeps the source code of the client object model version (nowadays it is done manually as there are quite a lot of differences between the server and client object model versions).

1. Membership element support

The Membership element allows creating CAML queries with different membership conditions. The following examples show how it can be done now with Camlex:

// SPWeb.AllUsers
var caml = Camlex.Query().Where(x => Camlex.Membership(
	x["Field"], new Camlex.SPWebAllUsers())).ToString();
var expected =
	"<Where>" +
	"  <Membership Type=\"SPWeb.AllUsers\">" +
	"    <FieldRef Name=\"Field\" />" +
	"  </Membership>" +
	"</Where>";

// SPGroup
var caml = Camlex.Query().Where(x => Camlex.Membership(
	x["Field"], new Camlex.SPGroup(3))).ToString();
var expected =
	"<Where>" +
	"  <Membership Type=\"SPGroup\" ID=\"3\">" +
	"    <FieldRef Name=\"Field\" />" +
	"  </Membership>" +
	"</Where>";

// SPWeb.Groups
var caml = Camlex.Query().Where(x => Camlex.Membership(
	x["Field"], new Camlex.SPWebGroups())).ToString();
var expected =
	"<Where>" +
	"  <Membership Type=\"SPWeb.Groups\">" +
	"    <FieldRef Name=\"Field\" />" +
	"  </Membership>" +
	"</Where>";

// CurrentUserGroups
var caml = Camlex.Query().Where(x => Camlex.Membership(
			x["Field"], new Camlex.CurrentUserGroups())).ToString();
var expected =
	"<Where>" +
	"  <Membership Type=\"CurrentUserGroups\">" +
	"    <FieldRef Name=\"Field\" />" +
	"  </Membership>" +
	"</Where>";

// SPWeb.Users
var caml = Camlex.Query().Where(x => Camlex.Membership(
	x["Field"], new Camlex.SPWebUsers())).ToString();
var expected =
	"<Where>" +
	"  <Membership Type=\"SPWeb.Users\">" +
	"    <FieldRef Name=\"Field\" />" +
	"  </Membership>" +
	"</Where>";

2. OffsetDays support

The OffsetDays attribute is used with the Today element and allows adding or subtracting a number of days from today for constructing dynamic CAML queries based on a current datetime offset. The following example shows how it can be done with Camlex:

string caml = Camlex.Query().Where(x =>
	x["Created"] > ((DataTypes.DateTime)Camlex.Today).OffsetDays(-1)).ToString();
const string expected =
	"<Where>" +
	"  <Gt>" +
	"    <FieldRef Name=\"Created\" />" +
	"    <Value Type=\"DateTime\">" +
	"      <Today OffsetDays=\"-1\" />" +
	"    </Value>" +
	"  </Gt>" +
	"</Where>";

Thank you for using Camlex and stay tuned for further improvements.

Thursday, July 16, 2020

One problem with caching old web job binaries after Azure web job update with WAWSDeploy

WAWSDeploy is a convenient console utility which allows deploying a folder or a zip file to an Azure Website using WebDeploy (see https://github.com/davidebbo/WAWSDeploy). Using this tool it is possible to automate tasks in your Azure deployment, e.g. uploading Azure web job binaries (they should be packaged into a zip file for that). However you should be aware of one potential problem.

Triggered web job binaries are uploaded to the "app_data/jobs/triggered/{JobName}" path (the full path will be /site/wwwroot/app_data/jobs/triggered/{JobName}). You may connect to your Azure App Service via FTP (connection settings may be copied from Azure portal > App service > Deployment center > FTP) and check that your web job folders are there.

Note that you need to use the FTP protocol (not FTPS) if you connect e.g. via the WinSCP tool.

The problem is the following: you may create the zip package with web job binaries just by selecting all web job files and adding them to the archive – in this case there won't be a root folder in the archive and all files will be located in the archive root. Alternatively you may select their parent folder (e.g. bin/Release) and add the whole folder to the archive – in this case the archive will contain a Release root folder.

So if the first time you created the archive using the 1st method (by archiving files), then created another archive using the 2nd method (by archiving the folder) and uploaded this 2nd zip archive via WAWSDeploy – it will just create a Release subfolder inside app_data/jobs/triggered/{JobName}. I.e. there will be 2 versions of the web job: one in app_data/jobs/triggered/{JobName} and another in app_data/jobs/triggered/{JobName}/Release. The Azure runtime will use the first found exe file, which will be the exe file from the root (not from the Release subfolder), and will use it for running the web job.

One way to avoid this problem is to always delete the existing web job before updating it (you may check how to delete a web job via PowerShell here: How to remove Azure web job via Azure RM PowerShell). In this case the old binaries will be deleted, so the chosen archiving method won't cause problems during the update.

Wednesday, July 15, 2020

How to remove Azure web job via Azure RM PowerShell

Azure web jobs allow you to run code on a schedule which is configured via a CRON expression. This is a convenient tool for performing repeated actions. Sometimes we need to remove web jobs via PowerShell in an automation process. In this article I will show how to remove an Azure web job using Azure RM PowerShell cmdlets.

If you try to google this topic you will most probably find the Remove-AzureWebsiteJob cmdlet. The problem however is that this cmdlet is used for classic resources and you will most probably have problems using it (e.g. errors from Select-AzureSubscription -Default). So we need some other way to remove Azure web jobs – preferably using Azure RM cmdlets. Fortunately it is possible – we may remove a web job using another cmdlet, Remove-AzureRmResource. Here is the syntax for removing a triggered web job:

$resourceName = $appServiceName + "/" + $webJobName
Remove-AzureRmResource -ResourceGroupName $resourceGroupName -ResourceName $resourceName -ResourceType microsoft.web/sites/triggeredwebjobs

The important things to note here are how the ResourceName param is specified for the web job (app service name/web job name) and the ResourceType which is set to microsoft.web/sites/triggeredwebjobs. If you need to delete a continuous web job, use the microsoft.web/sites/continuouswebjobs resource type instead.

Using this command you will be able to delete Azure web jobs via Azure RM PowerShell.

Monday, July 6, 2020

Run Azure function as Web job without code change

Sometimes it may be needed to test an existing Azure function as a web job (e.g. if you decide to move some long-running process from AF to a web job). In order to do that you need to create an App service first. After that create a new console app project in Visual Studio, reference your Azure functions project as if it were a regular class library and call the necessary Azure function from Program.cs (it is possible since AFs are created as static methods of a class).

The problem however is that if you read app settings in the Azure function code – you most probably do it like this:

string foo = Environment.GetEnvironmentVariable("Foo", EnvironmentVariableTarget.Process);

But if you just copy the AF app settings to the appSettings section of app.config of the new console project, the above code will return an empty string for any app setting. So in order to call our Azure function from the console application and run it as a web job we need to perform an additional step: iterate through all app settings and fill the environment variables of the process:

var appSettings = ConfigurationManager.AppSettings;
foreach (var key in appSettings.AllKeys)
{
 Environment.SetEnvironmentVariable(key, appSettings[key],
  EnvironmentVariableTarget.Process);
}

After that the app settings will be successfully read and the Azure function may run as a web job without changing its code.

Thursday, July 2, 2020

Sharepoint MVP 2020

Yesterday I got an email from MS about my MVP status renewal. This is my 10th award in a row and I would like to say thanks to MS and to the whole community which gives me motivation and energy to continue sharing my experience and solutions from every-day work.

Sharepoint and the MS ecosystem in general keep evolving: we have switched to modern Sharepoint with tight integration with Azure, which becomes more and more powerful. New technologies require new skills and experience, so I think it is crucial that we continue sharing our knowledge with colleagues all over the world. The global pandemic we faced this year introduced new challenges for the IT world: the tools we build became more crucial than ever since people switched to remote work, and performance and stability came to the forefront. From this perspective it is very important to take part in IT community life by answering forum questions, contributing to open source software, speaking at events, writing technical articles and so on. I hope my contributions have helped someone, and I would like to say thanks one more time to MS for recognizing them.

Monday, June 29, 2020

Problem with ReSharper unit test runner with NUnit 3.12: unit test is inconclusive

Some time ago we switched to NUnit 3.12 and NUnit3TestAdapter in the Camlex library and noticed the following problem with the ReSharper unit test runner: all unit tests were marked as "inconclusive" and didn't run. At the same time the built-in Visual Studio test runner worked. I used the 2019 version of ReSharper when I found this problem.

Searching the web I found similar problems from other people on the JetBrains support site. In order to fix it I uninstalled the 2019 version and installed the latest available version of ReSharper, JetBrains.ReSharperUltimate.2020.1.3. After that the tests became green again.

So if you face the same problem, try installing the latest ReSharper version.

Friday, June 12, 2020

Problem with growing threads count when using OfficeDevPnP AuthenticationManager

Recently we faced the following problem in our Azure function app: after some time of running, the Azure functions got stuck and the only way to make them work again was to restart them manually from the Azure portal UI. Research showed that the problem was related to thread count: for some reason the thread count grew steadily to 6K threads, after which the Azure function app became unresponsive.

We reviewed our code, made some optimizations, changed some async calls to sync equivalents, but it didn't fix the problem. Then we continued troubleshooting and found the following: in the code we used OfficeDevPnP AuthenticationManager in order to get an access token for communicating with Sharepoint Online, like this:

using (var ctx = new OfficeDevPnP.Core.AuthenticationManager().GetAppOnlyAuthenticatedContext(url, clientId, clientSecret))
{
    ...
}

Note that in this example a new instance of AuthenticationManager is created each time the above code is executed. In our case it was inside a queue-triggered Azure function which was called quite frequently. Elio Struyf (we worked on this problem together, so credits for this finding go to him :) ) tried making AuthenticationManager static – i.e. only one instance is created for the whole Azure function worker process:

public static class Helper
{
 private static OfficeDevPnP.Core.AuthenticationManager authMngr = new OfficeDevPnP.Core.AuthenticationManager();

 public static void QueueTriggeredFunc()
 {
  using (var ctx = authMngr.GetAppOnlyAuthenticatedContext(url, clientId, clientSecret))
  {
  ...
  }
 }
}

After that the threads count stabilized at around 60 threads on average.

We reported this problem to the OfficeDevPnP team and they confirmed that there is a problem in AuthenticationManager: it internally creates a thread for monitoring the access token's lifetime, but this thread is not released when the AuthenticationManager instance is collected by the garbage collector (also worth mentioning that AuthenticationManager itself was not disposable at the moment this problem was found; we used OfficeDevPnP 3.20.2004.0). For now the workaround of making AuthenticationManager static was OK, and as far as I know the OfficeDevPnP team is currently working on a proper fix for this problem, so hopefully it will be available globally soon.

Update 2020-06-15: from the latest news I've got (thanks to Yannick Plenevaux :) ), the OfficeDevPnP team addressed this issue in version 202006.2 and made AuthenticationManager disposable. So you may need to review your code and add using() around it.

Monday, June 8, 2020

Fix AccessToKeyVaultDenied error in Azure App service app settings which use KeyVault reference

Azure KeyVault is a convenient and safe storage for secrets (passwords, keys, etc.) which can be used in your apps instead of storing them in plain text in app settings. It adds a number of advantages like access control, expiration policies, versioning, access history and others. However it has some gotchas which you should be aware of. In this post I will describe one such gotcha.

First of all you need to configure access to the key vault secret so your App service or Azure function is able to read values from there. You may check how to do that e.g. here: Provide Key Vault authentication with an access control policy. After that go to App service > Configuration, click the Edit icon for the app setting which references the KeyVault secret and check that there are no errors. Even if you did everything correctly you may see Status = AccessToKeyVaultDenied and the following error description:

Key Vault reference was not able to be resolved because site was denied access to Key Vault reference's vault

In order to fix it try the following workaround:

  1. Delete app setting from UI
  2. Save changes
  3. Add the same app setting with KeyVault reference (i.e. with @Microsoft.KeyVault(SecretUri=…))
  4. Save changes again

After that, if permissions are configured properly, the Status should change to Resolved and your app should be able to successfully resolve the secret from the KeyVault reference.

Monday, June 1, 2020

Upload WebJob to Azure App service with predefined schedule

As you probably know, in the Azure Portal it is possible to create Azure web jobs with Type = Triggered and specify a CRON expression which defines the job schedule. In this post I will describe how to upload an Azure web job with a predefined CRON expression.

Before we start we need to create an Azure app service in the tenant and prepare a zip file with the executable of the web job itself. Usually it can be created by archiving the bin/Release (or bin/Debug) folder of the console project with the web job logic. However there is one important detail: in order to set the web job schedule we need to add a special file called settings.job to the project and define the CRON expression inside it. E.g. if we want the job to run every 15 minutes we define it like this:

{"schedule": "0 */15 * * * *"}

After that also go to the file properties in Visual Studio and set the "Copy to Output Directory" property to "Copy always". Then compile the project and create a zip file from the bin/Release (or bin/Debug) folder.

The next step is to create the web job from the zip file. It can be done with the following convenient command line utility available on github: WAWSDeploy. With this tool the web job can be created e.g. from PowerShell by running the following command:

WAWSDeploy.exe MyPackage.zip publishSettings.txt /t "app_data\jobs\triggered\MyJob"

In this command publishSettings.txt is the file with publishing settings for your App service. It can be created by the Get-AzureRmWebAppPublishingProfile cmdlet. This command will create a new web job called MyJob from the provided zip file, with the predefined schedule.

Tuesday, May 19, 2020

Fix javascript heap out of memory error when running local SPFx web server via gulp

If you develop an SPFx web part for Sharepoint and run the local web server on your dev env:

gulp serve --nobrowser

then you may face the famous "JavaScript heap out of memory" error.

In order to avoid this error we need to increase the nodejs heap size. On Windows it can be done as follows:

  1. Run the following command in your terminal client:
    setx -m NODE_OPTIONS "--max-old-space-size=8192"

    (after that NODE_OPTIONS should be added to PC > Advanced properties > Environment variables)
  2. Restart the terminal session which is used for running gulp to ensure that the new env variable is applied.
  3. Run web server:
         gulp serve --nobrowser

After that your solution should build without the heap out of memory error.
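
To double-check that the new limit was actually picked up by node, you can print the current heap limit; a small sketch of mine (run it e.g. with ts-node in the new terminal session):

import * as v8 from "v8";

// heap_size_limit reflects --max-old-space-size; expect roughly 8192 MB after the change
const limitMb = Math.round(v8.getHeapStatistics().heap_size_limit / (1024 * 1024));
console.log(`Current heap limit: ${limitMb} MB`);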

Monday, May 18, 2020

Query limits in Azure Table storage API

As you probably know, the Table API used by Azure table storage is limited to the following operators which can be used in queries:

  • =
  • <>
  • >
  • <
  • <=
  • >=

We needed to run an analogue of the SQL NOT IN operator against the RowKey column:

RowKey NOT IN (id1, id2, …, id_n)

where these ids were guids represented as strings. With the operators above, we had to use the not-equal operator (<>) and construct a query like this:

RowKey <> id1 AND RowKey <> id2 AND … RowKey <> id_n

The problem is that this list of guids didn't have a fixed size and could potentially be big. So it was interesting whether there is a limit on maximum query length and, if yes, what the exact value of this maximum is.

For testing I created a test app and generated conditions over guids like this (in addition to the mentioned conditions on RowKey we needed to get rows from a particular partition only – so this extra condition was added as well; the exact partition name is not important – we only need to know that its length was 8 symbols, as it is counted in the overall query length calculation below):

var tableClient = storageAccount.CreateCloudTableClient(new TableClientConfiguration());

var table = tableClient.GetTableReference("MyTable");
var queryConditions = new List<string>();
queryConditions.Add(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Test1234"));
int numOfConditions = 50;
for (int i = 0; i < numOfConditions; i++)
{
 queryConditions.Add(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.NotEqual, Guid.NewGuid().ToString().ToLower()));
}
string filter = CombineUsingAnd(queryConditions);
var query = new TableQuery().Where(filter);

var watch = System.Diagnostics.Stopwatch.StartNew();
var result = table.ExecuteQuery(query).ToList();
watch.Stop();


public static string CombineUsingAnd(List<string> queryConditions)
{
 if (queryConditions == null || queryConditions.Count == 0)
 {
  return string.Empty;
 }
 string filter = string.Empty;
 foreach (string queryCondition in queryConditions)
 {
  if (string.IsNullOrEmpty(queryCondition))
  {
   continue;
  }

  if (string.IsNullOrEmpty(filter))
  {
   filter = queryCondition;
  }
  else
  {
   filter = TableQuery.CombineFilters(filter, TableOperators.And, queryCondition);
  }
 }

 return filter;
}

E.g. here is how the query looks with conditions on 50 guids:

((((((((((((((((((((((((((((((((((((((((((((((((((PartitionKey eq 'Teamwork') and (RowKey ne '60712ffe-061b-4a46-8402-01381177c4da')) and (RowKey ne '85694313-2776-485c-833d-ae78c185adba')) and (RowKey ne 'dac6f7fd-1198-486c-b3ea-e90337ee9711')) and (RowKey ne '5faf85cb-4662-4843-87f6-ac6dc20b04e1')) and (RowKey ne '8f524940-9822-46e1-b961-59a2e8fa51b4')) and (RowKey ne 'bacd765e-64d3-464a-a57e-bb30eeca0c2f')) and (RowKey ne '75ba6126-f529-4b30-ae41-882f2b223bff')) and (RowKey ne '5e9950ac-e611-4d58-8c91-8fcf16f2886a')) and (RowKey ne '8f08bd7b-7d6e-4c21-8b94-d5ca2a15f849')) and (RowKey ne 'a965f6f6-f03c-4b4a-91be-98481a35ba11')) and (RowKey ne '5858de75-addb-4f5a-acc1-6f9f55078b4f')) and (RowKey ne '9ada1d04-dd03-4b1f-ae26-bc51b7f35ec3')) and (RowKey ne 'cbb976e7-87c9-4b83-8007-355f3ce1061d')) and (RowKey ne '59e06da8-f343-4a98-9382-0bbf1ce21d3f')) and (RowKey ne 'ea0763d9-cb64-4750-a7e0-c1869ca82a15')) and (RowKey ne '926106f9-bf64-42db-b27f-be9bcb45907e')) and (RowKey ne '3cdcaf1e-ff36-4243-b865-9424e1f46427')) and (RowKey ne '8fcfc4ca-7510-4fa6-95d7-3ff2999de509')) and (RowKey ne '21b04d3a-32e0-4a59-900a-30babfa8fbd2')) and (RowKey ne 'df181ee8-4dc0-4232-b2bc-7e13dc8f2abf')) and (RowKey ne '6a0038d0-40a4-4a11-b626-9850b8178caa')) and (RowKey ne 'ffb78a96-85c9-4648-aba7-3baa56be6c1f')) and (RowKey ne '0c56afd8-164e-4574-9054-8824e6fc1969')) and (RowKey ne '51d95602-cfee-40bf-b9e3-b0f935083742')) and (RowKey ne 'f286d1fb-b9bb-4850-a911-58a77ffc7991')) and (RowKey ne '50280089-e61b-4874-8f99-8dc437f74181')) and (RowKey ne '05f72847-3936-4efc-9902-540a9d9c5ca8')) and (RowKey ne '066fcbe4-afee-46ff-b6d2-18e072f850a0')) and (RowKey ne '602f7cd4-7697-49ca-af51-e96c70eefed5')) and (RowKey ne '6858f832-a58e-4bcb-b57b-7c40d32cbcb6')) and (RowKey ne 'f9e2cc4f-a579-46f4-820b-e7d8ab9711c8')) and (RowKey ne 'cbb68c38-7e73-463b-8839-bef040803c37')) and (RowKey ne '1bf0d7ff-ece6-4ef1-9d48-ae424feea108')) and (RowKey ne 'de1dbdbf-8a75-40b4-bb9b-81e95ff4a542')) and (RowKey ne '31f5c94e-3eea-4c5a-8e60-8899eddc7829')) and (RowKey ne '94bc24a8-28e4-4b6d-965e-9360104bff21')) and (RowKey ne '93d7f84e-eb4b-4c7d-826b-196e8b46255e')) and (RowKey ne 'bd0d0989-7256-4ffc-8904-6135955b8aae')) and (RowKey ne '9c38865a-913c-468c-bd26-25e606f9b73a')) and (RowKey ne 'c2f9b899-76c1-4b04-aa68-fb3f903d81d4')) and (RowKey ne '9b478051-78b2-4a67-a25f-6cc09251ec33')) and (RowKey ne '908f80ea-e31f-4a85-9ff8-8cb80b601434')) and (RowKey ne '7775f8d4-2b9c-4903-a191-b54d214e380a')) and (RowKey ne '55b24a87-3259-4ca4-8869-f6d68f9c95c3')) and (RowKey ne '0476263d-cf8a-49b3-89f1-0bc8df9fd906')) and (RowKey ne 'f7a45c30-eba7-4944-b08b-a875d4b871e0')) and (RowKey ne '4136dc3c-10d3-4d58-afb9-23d8b422822e')) and (RowKey ne 'b6a40f8e-5284-4cbd-9356-2296df415650')) and (RowKey ne 'd63ecb62-5f48-4314-8a15-08c0b62ee835')) and (RowKey ne 'e3acdc4f-70a5-4a73-b023-1845ccf9ea51')

Testing results are summarized in the following table:

Num of conditions | Filter length | Query elapsed time
50 | 2876 | 12075
100 | 5726 | 10550
112 | 6410 | 10361
113 | 6467 | 10967
114 | 6524 | Bad request
> 114 | | Bad request

So the Table storage maximum query length is somewhere between 6467 and 6524 symbols. For us this knowledge was enough to proceed. Hope this information will help someone.
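
If the id list may grow beyond this boundary, it is safer to guard the filter length and split the ids into batches. A small TypeScript sketch of mine illustrating such a guard (the 6400 value is a conservative number derived from the measurements above):

const MAX_FILTER_LENGTH = 6400; // conservative, below the measured 6467..6524 boundary

function buildNotInFilter(partitionKey: string, excludedIds: string[]): string {
  const conditions = [
    `PartitionKey eq '${partitionKey}'`,
    ...excludedIds.map(id => `RowKey ne '${id}'`)
  ];
  const filter = conditions.join(" and ");
  if (filter.length > MAX_FILTER_LENGTH) {
    throw new Error(`Filter length ${filter.length} exceeds the safe limit - split the ids into batches`);
  }
  return filter;
}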

Tuesday, May 12, 2020

How to run handler on ribbon open event and access ribbon elements via javascript in Sharepoint

Sometimes we need to run code when the Sharepoint ribbon is opened. In many cases the ribbon is closed by default and you can't access its elements in a regular document.ready handler because the ribbon elements are not created in the DOM yet. In this case you need to subscribe to the ribbon open event and access the elements there. This is how it can be done:

ExecuteOrDelayUntilScriptLoaded(function() {
 SP.Ribbon.PageManager.get_instance().add_ribbonInited(function(){
  console.log("Ribbon is opened");
 });
}, "sp.ribbon.js");

Interestingly, even inside this handler you can't access the elements via a jQuery id selector – among other things, jQuery interprets the dots in the element id as class selectors. So e.g. the following code which tries to access the New document tab in a doclib will return 0:

$("#Ribbon.Documents.New").length

In order to access these dynamically added ribbon elements we can use the plain javascript call document.getElementById. Here is the full code:

ExecuteOrDelayUntilScriptLoaded(function() {
 SP.Ribbon.PageManager.get_instance().add_ribbonInited(function(){
  var el = document.getElementById("Ribbon.Documents.New");
  if (el) {
   console.log("New document tab is found");
  }
 });
}, "sp.ribbon.js");

Using this approach you will be able to access and manipulate ribbon elements via javascript.

Monday, April 20, 2020

Create Azure Notification Hub with configured Apple APNS and Google FCM using PowerShell and ARM templates

If you have an Azure notification hub with configured Apple APNS and Google FCM and try to export the ARM template of this notification hub, you will find that the APNS and FCM configuration won't be included in the template. In order to add APNS and FCM you need to add the following properties to the ARM template:

  • GcmCredential for FCM
  • ApnsCredential for APNS

If you use the Token authentication type and Production endpoint for Apple APNS, the template will look like this:

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "hubName": {
            "type": "string"
        },
        "namespaceName": {
            "type": "string"
        },
        "googleApiKey": {
            "type": "string"
        },
        "appId": {
            "type": "string"
        },
        "appName": {
            "type": "string"
        },
        "keyId": {
            "type": "string"
        },
        "token": {
            "type": "string"
        },
    },
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.NotificationHubs/namespaces/notificationHubs",
            "apiVersion": "2017-04-01",
            "name": "[concat(parameters('namespaceName'), '/', parameters('hubName'))]",
            "location": "North Europe",
            "properties": {
                "authorizationRules": [],
    "GcmCredential": {
     "properties": {
     "googleApiKey": "[parameters('googleApiKey')]",
     "gcmEndpoint": "https://android.googleapis.com/gcm/send"
     }
    },
    "ApnsCredential": {
     "properties": {
      "appId": "[parameters('appId')]",
      "appName": "[parameters('appName')]",
      "keyId": "[parameters('keyId')]",
      "token": "[parameters('token')]",
      "endpoint": "https://api.push.apple.com:443/3/device"
     }
    }
            }
        },
        {
            "type": "Microsoft.NotificationHubs/namespaces/notificationHubs/authorizationRules",
            "apiVersion": "2017-04-01",
            "name": "[concat(parameters('namespaceName'), '/', parameters('hubName'), '/DefaultFullSharedAccessSignature')]",
            "dependsOn": [
                "[resourceId('Microsoft.NotificationHubs/namespaces/notificationHubs', parameters('namespaceName'), parameters('hubName'))]"
            ],
            "properties": {
                "rights": [
                    "Listen",
                    "Manage",
                    "Send"
                ]
            }
        },
        {
            "type": "Microsoft.NotificationHubs/namespaces/notificationHubs/authorizationRules",
            "apiVersion": "2017-04-01",
            "name": "[concat(parameters('namespaceName'), '/', parameters('hubName'), '/DefaultListenSharedAccessSignature')]",
            "dependsOn": [
                "[resourceId('Microsoft.NotificationHubs/namespaces/notificationHubs', parameters('namespaceName'), parameters('hubName'))]"
            ],
            "properties": {
                "rights": [
                    "Listen"
                ]
            }
        }
    ]
}

In order to create a new notification hub using this ARM template we need to call the New-AzResourceGroupDeployment cmdlet and provide the params specified in the parameters section of the template:

$resourceGroupName = ...
$hubNamespaceName = ...
$hubName = ...
$googleApiKey = ...
$appId = ...
$appName = ...
$keyId = ...
$token = ...

New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile ./template.json -hubName $hubName -namespaceName $hubNamespaceName -googleApiKey $googleApiKey -appId $appId -appName $appName -keyId $keyId -token $token

It will create a new Azure notification hub with the specified name in the specified namespace, with configured Apple APNS (Token authentication type) and Google FCM.

Thursday, April 16, 2020

Fix ResourceUnavailable error when trying to install a PowerShell module via Install-Module cmdlet

If you try to install a PowerShell module via the Install-Module cmdlet, e.g.:

Install-Module -Name SharePointPnPPowerShellOnline

you may get the following error:

WARNING: Source Location 'https://www.powershellgallery.com/api/v2/package/SharePointPnPPowerShellOnline/3.20.2004' is not valid.
PackageManagement\Install-Package : Package 'SharePointPnPPowerShellOnline' failed to download.
     + CategoryInfo          : ResourceUnavailable: (C:\Users\Develo...20.2004.0.nupkg:String) [Install-Package], Exception
     + FullyQualifiedErrorId : PackageFailedInstallOrDownload,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackage

In order to fix it run the following command in your PowerShell session:

[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

After that run Install-Module again – the error should disappear.

Sunday, April 5, 2020

Fix "Security token service cannot be activated" problem in Sharepoint farm after inplace upgrade of Windows Server 2012 to R2

If you performed an inplace upgrade of Windows Server 2012 to Windows Server 2012 R2 with Sharepoint Server running, you may face the following error after the upgrade is completed: when you try to open any Sharepoint web application, the following exception is shown:

WebHost failed to process a request.
  Sender Information: System.ServiceModel.ServiceHostingEnvironment+HostingManager/12547953
  Exception: System.ServiceModel.ServiceActivationException: The service '/SecurityTokenServiceApplication/securitytoken.svc' cannot be activated due to an exception during compilation.  The exception message is: Exception has been thrown by the target of an invocation.. ---> System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentNullException: Value cannot be null.
Parameter name: certificate
    at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate, String id, Boolean clone, Boolean disposable)
    at System.IdentityModel.Tokens.X509SecurityToken..ctor(X509Certificate2 certificate)


The error says that the certificate for the Secure token service is not specified. In order to fix this error you need to replace the certificate for STS:

  1. Open IIS manager > Server certificates > Create Self-Signed Certificate
  2. After that export the created certificate to a local folder

Next run the following PowerShell script which will update the certificate for STS:

$pfxPath = "path to pfx"
$pfxPass = "certificate password"
$stsCertificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $pfxPath, $pfxPass, 20
Set-SPSecurityTokenServiceConfig -ImportSigningCertificate $stsCertificate
certutil -addstore -enterprise -f -v root $stsCertificate
iisreset
net stop SPTimerV4
net start SPTimerV4

After that open the Sharepoint web app again – the error should be gone.

Host React web app on Azure

In this article I will show how to host a React web app on the Azure cloud platform. We will create a new test React web app from scratch, build it, then create a new Resource group and App service in Azure and after that transfer our React web app there. Before starting, ensure that you have a working Azure account with a valid subscription (you may create a free subscription for 1 month to try things out).

First of all we need to create the React app. Before that you need to install nodejs on your PC. By default it will also install the popular npm package manager and add its folders to the PATH env variable. When it is done, run the following command in your working folder:

npx create-react-app test-react-app

It will create test-react-app folder and new test React app inside it. In order to test it on local dev web server run the following command:

npm start

It will launch a web server and open the default browser with the React app running on http://localhost:3000.

In order to host our React app on Azure we need to build it first by running the following command:

npm run build

It will create a build subfolder ready for deployment and copy the web app files there.

Once the React app is ready we need to configure Azure for hosting it. Go to Azure portal https://portal.azure.com > Resource groups > Create resource group. In the opened window give a name to the new resource group and select the closest location.

When the new resource group is created, choose New > Web App.

In the opened window choose the created resource group, give a name to the new web app and set the following parameters:
Publish = Code
Runtime stack = Node 10.14
Operating system = Windows
Also select a region (usually the same region which was used for creating the resource group) and an App service plan name. As we create the new web app for testing purposes we will use the Free app service plan: under Sku and size click Change size and change the App service plan to Dev/Test > F1 free, then click Create. After that go to the created App service > Deployment center > FTP > Dashboard > App credentials and copy the FTP host name, username and password.

Use your favorite FTP client to connect to the Azure app service with the credentials copied in the previous step. When the connection is established, copy the files from the local /test-react-app/build folder to the FTP /site/wwwroot folder. When it is done, try to open https://{name-of-your-web-app}.azurewebsites.net in a web browser (instead of {name-of-your-web-app} use the name of the Azure web app chosen during its creation). If everything was done correctly you will see your React web app hosted in Azure.

Thursday, April 2, 2020

Problem with fetching photos of private O365 groups

As you probably know, in O365 we may create groups which have one of the following visibilities:

  • Public
  • Private

Everybody in your organization may join/leave public groups, while for private groups only owners of the group may add you to it. In this article I will describe one problem related to fetching private group photos via Graph (see also my previous article where I mentioned another problem related to group images: Why you should be careful with /groups/{id}/photo and /users/{id}/photo endpoints in MS Graph or unintentional getting photos of big sizes in Graph).

In order to fetch group photo the following Graph endpoint should be used:

https://graph.microsoft.com/v1.0/groups/{id}/photo

First of all we need to mention that this endpoint is available only via user delegated permissions (it doesn't work with app-only permissions). If we try to fetch a photo of some private group in Graph explorer using a service account which is not a member of this group, we will get a 404 Not found error.

After we add the same service account to the members of the group, the image is retrieved successfully.

Even if we try to fetch photos of a private group under a global admin account which is not a member of this group – we will still get 404 Not found. So the only way to fetch a photo of a private group is to add the user account to the members or owners of this group. Be aware of this problem when you plan group image fetching functionality.
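
For illustration, here is a hedged sketch of mine (not from the post) of probing a group photo with plain fetch and handling the 404 case, assuming a delegated-permissions access token is already available:

async function tryGetGroupPhoto(accessToken: string, groupId: string): Promise<Blob | null> {
  const res = await fetch(`https://graph.microsoft.com/v1.0/groups/${groupId}/photo/$value`, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  if (res.status === 404) {
    // private group and the calling user is not a member/owner (or the group has no photo)
    return null;
  }
  return res.blob();
}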

Wednesday, April 1, 2020

One problem with AllowToAddGuests and AllowGuestsToAccessGroups O365 groups tenant settings

As you probably know, the AllowToAddGuests and AllowGuestsToAccessGroups tenant settings determine whether or not external users are able to access O365 groups in your tenant. You may view or change them via PowerShell:

AzureADPreview\Connect-AzureAD
Get-AzureADDirectorySetting

which will show the current values of these settings.

If you try to update them, you may face different behavior on different tenants. E.g. if we try to change them to true like this:

$groupsConfig = Get-AzureADDirectorySetting -Id {settingId}
$groupsConfig["AllowToAddGuests"] = $true
$groupsConfig["AllowGuestsToAccessGroups"] = $true
Set-AzureADDirectorySetting -Id {settingId} -DirectorySetting $groupsConfig

we may get the following result.

Note that in the script we used lowercase $true, while in the result we got True with a capital T. On another tenant the same script may save the values using the same letter case as used in the script, i.e. $true will be saved as true and $True as True. If software then uses case-sensitive comparison of these values, it will cause problems. So be aware of this behavior – hope it will help someone.

Wednesday, March 18, 2020

How to provision modern Sharepoint page with custom SPFx web part via PnP template

If you developed a custom SPFx web part and tested it on a dev env, you will most probably want to automate its provisioning to customers' environments. In this article I will show how to do that via a PnP template.

When you need to create a PnP template, the simplest option is to export it from an existing site and copy those components which you need. This is much faster than trying to remember the whole PnP schema.

So first of all we need to build the sppkg in release mode (with the "--ship" parameter) and upload it to the App catalog:

gulp clean
gulp bundle --ship
gulp package-solution --ship

After that add your web part to some modern page – we will use test.aspx as an example here. Now you have a modern site running with your custom web part.

Next step is to export PnP template from your site:

Get-PnPProvisioningTemplate -Out template.xml

When the export is done, edit template.xml and copy the pnp:ClientSidePages section which may look like this (if you have several SPFx web parts there will be several pnp:CanvasControl instances):

<pnp:ClientSidePages>
 <pnp:ClientSidePage PageName="test.aspx" EnableComments="false" Publish="true">
   <pnp:Sections>
  <pnp:Section>
    <pnp:Controls>
   <pnp:CanvasControl WebPartType="Custom" JsonControlData="..." ControlId="..." Order="1" Column="1" />
    </pnp:Controls>
  </pnp:Section>
   </pnp:Sections>
 </pnp:ClientSidePage>
</pnp:ClientSidePages>

and paste it into the PnP template which you are going to use for provisioning on customers' environments. When this template is applied to a site, the site will get a modern page with your custom SPFx web part.

Monday, March 16, 2020

Move Mercurial repository to Git on Bitbucket

As you probably know, Bitbucket will discontinue Mercurial support – all Mercurial repositories will be removed on June 1, 2020. So if you have Mercurial repositories there, it is good to take care of them in advance and move them to Git which becomes the basic source control system on Bitbucket. The articles which I found lacked some important information, so it still took time to go through them. So I decided to write a separate post and summarize all steps needed for moving Mercurial repositories to Git on a Windows 10 PC:

1. Install the latest version of TortoiseHG (older versions may not have the HgGit plugin which is needed below)
2. Install Git for Windows
3. Rename the repository on Bitbucket and make hg clone from the new address
4. In the hg repository folder enable the HgGit plugin by adding the following section to the .hg/hgrc file:
[extensions]
hggit=
5. Go to "C:\Program Files\Git\usr\bin" and run ssh-keygen.exe with default settings – it will add 2 files to C:\Users\{username}\.ssh: id_rsa and id_rsa.pub
6. Copy the content of the id_rsa.pub file
7. Login to bitbucket.org > Profile page > Settings > Security > SSH Keys > Add key and insert the content of the copied id_rsa.pub
8. Go to the hg repository folder:
8.1. in .hg/hgrc add the following line under the [ui] section:
ssh = "ssh.exe"
8.2. ensure that the path "C:\Program Files\Git\usr\bin" (which is the path to ssh.exe) is added to the PATH environment variable
8.3. run the following command:
hg push git+ssh://{username}@bitbucket.org/{username}/{repository}.git
(the git repository url can be copied from bitbucket; then replace https with git+ssh)

After that your repository should be available in Git with all the version history and branches you had in Mercurial.

Thursday, March 12, 2020

Disable redirection from user details page UserDisp.aspx to MySite Person.aspx in Sharepoint

As you probably know, Sharepoint has built-in Created By and Modified By fields which are automatically added to all list items and documents added to Sharepoint. Under the hood it is done by defining these fields in the base content type derived by all other content types. If you add these fields to list views they will be clickable – it will be possible to click a user name and go to the user details page UserDisp.aspx which shows the user's attributes (synced with AD if you use Windows authentication).

However in some cases you may notice that instead of UserDisp.aspx, Sharepoint redirects to the user profile page Person.aspx in the MySites web application. If you are not planning to use user profiles and MySites on your Sharepoint site, this behavior may be unwanted and you may need to disable this automatic redirection.

Technically the redirect is made by the MySiteRedirection.ascx user control which is added by the MySite farm-scoped feature (FeatureId = 69cc9662-d373-47fc-9449-f18d11ff732c) which has the following elements.xml file:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
    <Control Id="ProfileRedirection" Sequence="100" ControlSrc="~/_controltemplates/mysiteredirection.ascx"/>
    <Control Id="MobileSiteNavigationLink1" Sequence="100"
        ControlClass="Microsoft.SharePoint.Portal.MobileControls.MobileMySitePageNavigation"
        ControlAssembly="Microsoft.SharePoint.Portal, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c">
    </Control>
</Elements>

In order to disable the redirection we need to deactivate this feature:

Disable-SPFeature -Id 69cc9662-d373-47fc-9449-f18d11ff732c

After that, clicking user names in the Created By/Modified By columns (and any other clickable column of User or Group type) will lead the user to the regular UserDisp.aspx page.

Monday, February 17, 2020

Several mandatory steps for every SQL Server installation

Recently I installed MS Sql Server (don't remember how many times I've done that already :) ) on a new virtual machine and realized that I always perform the following steps after each Sql Server installation:

1. Disable “sa” login

This step was added to the mandatory tasks list after a sad story: at the beginning of my IT career one of my Sql Server instances ran with the "sa" login enabled. The password was not very strong, and after some time it was brute-forced: I got a malicious Sql Server job which tried to download and execute remote code on my PC. Fortunately I noticed it in time and took the necessary actions. Since then I always disable the built-in "sa" login on Sql Server – this is the first thing hackers will try to brute force on your instance.

2. Configure backups

This step is quite obvious. Remember that you should not only test that your backups run on the specified schedule (e.g. if you configured nightly backups – check after a few days that backups are really made during the last nights) but also test the restore scenario. Some day these simple steps will save you from a lot of problems. You may configure backups in Sql Server Management Studio > Management > Maintenance plans > New maintenance plan > add a Back Up Database Task from the toolbox. After configuring the backup, set the schedule in the same window.

Another important note is that you should not store backups on the same PC. The safest option is to move backups from the local PC to cloud storage using some cloud backup tool.

3. Limit Sql Server trace log size

If Sql Server runs for a long period of time it may flood the hard drive with trace logs (don't mix them up with transaction logs – these are different). By default they are stored in the MSSQL/Logs subfolder under your Sql Server instance folder. In order to configure their size go to Sql Server Management Studio > Management > right click SQL Server Logs > Configure. Set some value in "Limit the number of error log files before they are recycled" and/or "Maximum size for error log file in KB".

This simple action will keep the Sql Server logs size under control. If you have similar mandatory steps in your practice, please share them in comments.

Tuesday, February 11, 2020

Resolve “Everyone except external users” group on old tenants

Some time ago I wrote how to get the login name of the special group "Everyone except external users" in Sharepoint Online: Get login name of special group "Everyone except external users" programmatically in Sharepoint. To remind, it is constructed like this:

"c:0-.f|rolemanager|spo-grid-all-users/” + tenantId

Today we faced a scenario where such login name could not be resolved on one customer's tenant. Research showed that a similar issue was also reported on the OfficeDevPnP github project page (see here). So, as it turned out, on old tenants this way of getting "Everyone except external users" may not work. As a workaround you may use the following solution from OfficeDevPnP:

public override string GetEveryoneExceptExternalUsersName(Web web)
{
 string userIdentity = "";
 try
 {
  // New tenant
  userIdentity = $"c:0-.f|rolemanager|spo-grid-all-users/{web.GetAuthenticationRealm()}";
  var spReader = web.EnsureUser(userIdentity);
  web.Context.Load(spReader);
  web.Context.ExecuteQueryRetry();
 }
 catch (ServerException)
 {
  // Old tenants
  string claimName = web.GetEveryoneExceptExternalUsersClaimName();
  var claim = Utility.ResolvePrincipal(web.Context, web, claimName, PrincipalType.SecurityGroup, PrincipalSource.RoleProvider, null, false);
  web.Context.ExecuteQueryRetry();
  userIdentity = claim.Value.LoginName;
 }

 return userIdentity;
}

I.e. at first we try to build the login name as "c:0-.f|rolemanager|spo-grid-all-users/" + tenantId and resolve a group with this name. If it fails we call the GetEveryoneExceptExternalUsersClaimName() extension method which returns the localized name of the "Everyone except external users" group for the current tenant (it has translations of the group name for all supported languages) and try to resolve this special group by that name. This code will work both on new and old tenants.

Friday, January 24, 2020

How to set Always On for Azure Function app via PowerShell

If you run Azure functions on a dedicated App service plan (vs the Consumption plan) of Basic pricing tier or above, you may speed up the warmup time of Azure functions by enabling the Always On setting (Azure Function app > Configuration > General settings).

If you need to automate this process here is the PowerShell script which can be used in order to set Always On for Azure Function app:

$resourceGroupName = ...
$functionAppName = ...
$webAppPropertiesObject = @{"siteConfig" = @{"AlwaysOn" = $true}}
$webAppResource = Get-AzureRmResource -ResourceType "microsoft.web/sites" -ResourceGroupName $resourceGroupName -ResourceName $functionAppName
$webAppResource | Set-AzureRmResource -PropertyObject $webAppPropertiesObject -Force

After that your Function app will have the Always On setting enabled.

Tuesday, January 21, 2020

Get Azure AD groups images from Graph API using delegated permissions

In order to get an Azure AD group image you need to use the following Graph API endpoint:

GET /groups/{id}/photo/$value

If you checked my previous blog post Why you should be careful with /groups/{id}/photo and /users/{id}/photo endpoints in MS Graph or unintentional getting photos of big sizes in Graph, then you probably know that it is better to specify the image size in the endpoint – otherwise you will get the biggest available high resolution image of the group. E.g. this is how you may get an image of 64x64 px size:

GET /groups/{id}/photos/64x64/$value

However there is another problem with fetching group images from Graph API which you have to take care of: it should be done using delegated permissions. I.e. it is not possible to retrieve AAD group images from Graph API using application permissions (at least at the moment of writing this blog post).

If you use the Graph client library for C#, you first need to create a GraphServiceClient object and provide an instance of a class which implements the IAuthenticationProvider interface and contains logic for authenticating requests using delegated permissions (via username and password). Here is how it may look:

public class AzureAuthenticationProviderDelegatedPermissions : IAuthenticationProvider
{
 public async Task AuthenticateRequestAsync(HttpRequestMessage request)
 {
  var delegatedAccessToken = await GetGraphAccessTokenForDelegatedPermissionsAsync();

  request.Headers.Add("Authorization", "Bearer " + delegatedAccessToken);
 }

 public async Task<string> GetGraphAccessTokenForDelegatedPermissionsAsync()
 {
  string clientId = ...
  string userName = ...
  string password = ...
  string tenant = ...
  
  var creds = new UserPasswordCredential(userName, password);
  var authContext = new AuthenticationContext(string.Format("https://login.microsoftonline.com/{0}", tenant));
  var authResult = await authContext.AcquireTokenAsync("https://graph.microsoft.com", clientId, creds);
  return authResult.AccessToken;
 }
}

var graphClient = new GraphServiceClient(new AzureAuthenticationProviderDelegatedPermissions());

After that we may fetch the group image from Graph API like this:

var stream = Task.Run(async () =>
{
    var photo = await graphClient.Groups[groupId].Photos["64x64"].Content.Request().GetAsync();
    return photo;
}).GetAwaiter().GetResult();

The actual user account used for fetching group images doesn't need any special permissions: it may be a regular user account without any admin rights.

Friday, January 17, 2020

Problems with Teams creation via beta teams Graph endpoint with owner without O365 license

When you create a Team by sending an HTTP POST request to the beta Graph endpoint /beta/teams (see Create team), you need to specify exactly 1 user as an owner of the new team:

POST https://graph.microsoft.com/beta/teams
Content-Type: application/json
{
  "displayName": "Test",
  "owners@odata.bind": [
    "https://graph.microsoft.com/beta/users('userId')"
  ]
}

where userId is the id (or login name) of the user which will be the owner of the team. However this request may fail with the following error:

Invoking endpoint 'https://graph.microsoft.com/beta/teams/' didn't succeed
Response status code 'Forbidden', reason phrase 'Forbidden'
Response content '
"code": "AccessDenied",
”message": "Failed to execute Templates backend request CreateTeamFromTemplateRequest

It may happen if the user specified as the team owner doesn't have an O365 license. In order to avoid this error, use users with an O365 license as team owners.
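
For reference, the same request can be sent from code; here is a minimal sketch of mine using fetch (token acquisition is out of scope, and the owner must be a licensed user as explained above):

async function createTeam(accessToken: string, ownerUserId: string): Promise<Response> {
  return fetch("https://graph.microsoft.com/beta/teams", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      displayName: "Test",
      // exactly one owner is required; must be a user with an O365 license
      "owners@odata.bind": [`https://graph.microsoft.com/beta/users('${ownerUserId}')`]
    })
  });
}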