Thursday, November 25, 2021

Whether SharePoint Online remote event receivers attached with Add-PnPEventReceiver fire depends on the authentication method used

Today I faced another strange problem related to SPO remote event receivers (I wrote about a different strange problem here: Strange problem with remote event receivers not firing in SharePoint Online sites whose URLs/titles end with digits). I attach a remote event receiver using the Add-PnPEventReceiver cmdlet like this:

Connect-PnPOnline -Url ...
$list = Get-PnPList "MyList"
Add-PnPEventReceiver -List $list.Id -Name TestEventReceiver -Url ... -EventReceiverType ItemUpdated -Synchronization Synchronous

In most cases the RER is attached without any errors. The question is whether it will actually fire after that. As it turned out, this depends on how exactly you connect to the parent SPO site with Connect-PnPOnline: the remote event receiver is attached successfully in all cases (you can see it in a SPO client browser), but in some cases it is never triggered. In fact I found that it is triggered only when you connect with the "UseWebLogin" parameter, while with all other methods it is not. The table below summarizes all methods I tried; a full working sequence is sketched after the table:

#  Method                                                         Is RER fired?
1  Connect-PnPOnline -Url ... -ClientId {clientId} -Interactive   No
2  Connect-PnPOnline -Url ... -ClientId {clientId}                No
3  Connect-PnPOnline -Url ...                                     No
4  Connect-PnPOnline -Url ... -UseWebLogin                        Yes
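
For reference, below is a minimal sketch of the full working sequence - the only difference from the non-working variants above is the -UseWebLogin switch. The site and receiver URLs are placeholders which you should replace with your own values; Get-PnPEventReceiver is used here only to verify that the receiver got attached:

# connect with web login - the only method from the table above for which the attached receiver actually fired in my tests
Connect-PnPOnline -Url https://{tenant}.sharepoint.com/sites/{site} -UseWebLogin

# attach synchronous ItemUpdated remote event receiver to the list
$list = Get-PnPList "MyList"
Add-PnPEventReceiver -List $list.Id -Name TestEventReceiver -Url https://{rerEndpointUrl} -EventReceiverType ItemUpdated -Synchronization Synchronous

# verify that the receiver got attached
Get-PnPEventReceiver -List $list.Id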


Monday, November 22, 2021

Strange problem with remote event receivers not firing in SharePoint Online sites whose URLs/titles end with digits

Some time ago I wrote about SharePoint Online remote event receivers (RERs) and how to use Azure functions with them: Use Azure function as remote event receiver for Sharepoint Online list and debug it locally with ngrok. When I tested RERs on other SPO sites I faced a very strange problem: on sites whose URLs/titles end with digits, remote event receivers were not fired. E.g. I often create modern Team/Communication sites with a datetime stamp at the end:

https://{tenant}.sharepoint.com/sites/{Prefix}{yyyyMMddHHmm}
e.g.
https://{tenant}.sharepoint.com/sites/Test202111221800

I noticed that on such sites the RER was not called for some reason. I used the same PowerShell script for attaching the RER to the SPO site as described in the above article, and the same Azure function app running locally with the same ports and ngrok tunneling.

After I created a site without digits at the end (https://{tenant}.sharepoint.com/sites/test), the RER started to work (without restarting the Azure function or ngrok - i.e. I used the same running instances of AF/ngrok for all tests). I haven't found any explanation of this problem so far. If you have faced this issue and know the reason, please share it.

Friday, November 12, 2021

Fix Azure functions error "Repository has more than 10 non-decryptable secrets backups (host)"

Today I faced a strange issue: after an update of an Azure function app made from Visual Studio, the Azure functions stopped working. The following error was shown in the logs:

Repository has more than 10 non-decryptable secrets backups (host)

In order to fix this error perform the following steps:

1. In the Azure portal go to the Azure function app > Deployment center > FTPS credentials and copy the credentials for connecting to the function app by FTP.


2. Then connect to the Azure function app by FTP, e.g. using the WinSCP client (or script the cleanup with the PowerShell sketch shown after these steps).

3. Go to the /data/functions/secrets folder and remove all files whose names have the following form:
*.snapshot.{timestamp}.json

4. After that go to Azure portal > Function app and restart it. It should start normally now.
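
If you prefer to script steps 2-4 instead of doing them manually in WinSCP, below is a minimal PowerShell sketch based on System.Net.FtpWebRequest. The host name, user name, password, function app name and resource group are placeholders which you need to replace with the FTPS credentials copied in step 1 and your own app details; the last line assumes the Az.Functions module is installed:

# FTPS credentials copied from Deployment center > FTPS credentials (placeholders)
$ftpHost  = "ftp://{ftps-endpoint-host}"   # keep the ftp:// scheme, FTPS is enforced via EnableSsl below
$username = '{app}\{deployment-user}'
$password = '{deployment-password}'
$folder   = "/data/functions/secrets"
$credentials = New-Object System.Net.NetworkCredential($username, $password)

# list files in the secrets folder
$listRequest = [System.Net.FtpWebRequest]::Create("$ftpHost$folder/")
$listRequest.Method = [System.Net.WebRequestMethods+Ftp]::ListDirectory
$listRequest.Credentials = $credentials
$listRequest.EnableSsl = $true
$reader = New-Object System.IO.StreamReader($listRequest.GetResponse().GetResponseStream())
$files = $reader.ReadToEnd() -split "`r`n" | Where-Object { $_ }
$reader.Close()

# delete only the snapshot backup files (*.snapshot.{timestamp}.json)
foreach ($file in ($files | Where-Object { $_ -like "*.snapshot.*.json" })) {
    $deleteRequest = [System.Net.FtpWebRequest]::Create("$ftpHost$folder/$file")
    $deleteRequest.Method = [System.Net.WebRequestMethods+Ftp]::DeleteFile
    $deleteRequest.Credentials = $credentials
    $deleteRequest.EnableSsl = $true
    $deleteRequest.GetResponse().Close()
    Write-Host "Deleted $file"
}

# restart the function app (requires the Az.Functions module and Connect-AzAccount)
Restart-AzFunctionApp -Name "{functionAppName}" -ResourceGroupName "{resourceGroup}"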

Wednesday, November 3, 2021

How to configure number of retries for processing failed Azure queue message before it is moved to poison queue

In my previous post I showed how to configure the Azure web job host to limit the number of queue messages which may be processed simultaneously (in parallel): How to set limit on simultaneously processed queue messages with continuous Azure web job. In this post I will show how to configure the web job host for processing failed messages.

As mentioned in the previous post, when a message is added to the Azure queue the runtime calls the method whose 1st parameter is decorated with the [QueueTrigger] attribute. If there was an unhandled exception, the message is marked as failed and the Azure runtime tries to process it again - by default a failed message is dequeued and processed up to 5 times before it is moved to the poison queue.

If we don't need 5 attempts (e.g. if our logic requires only 1 attempt), we may change this behavior using the following code:

public virtual void RunContinuous()
{
    var storageConn = ConfigurationManager.ConnectionStrings["AzureWebJobsDashboard"].ConnectionString;
    var config = new JobHostConfiguration();
    config.StorageConnectionString = storageConn;

    // move failed messages to the poison queue after the first unsuccessful attempt
    config.Queues.MaxDequeueCount = 1;
    
    var host = new JobHost(config);
    host.RunAndBlock();
}
 
public void ProcessQueueMessage([QueueTrigger("orders")]string message, TextWriter log)
{
    log.WriteLine("New message has arrived from queue");
    ...
}

I.e. we set config.Queues.MaxDequeueCount to 1. In this case a failed message is moved to the poison queue immediately and the web job won't try to process it again.

Tuesday, November 2, 2021

How to set limit on simultaneously processed queue messages with continuous Azure web job

With a continuous Azure web job we may create a web job host which listens to a specified Azure storage queue. When a new message is added to the queue, Azure triggers the handler method found in the assembly of the continuous web job (a public method which has a parameter decorated with the [QueueTrigger] attribute):

public virtual void RunContinuous()
{
    var storageConn = ConfigurationManager.ConnectionStrings["AzureWebJobsDashboard"].ConnectionString;
    var config = new JobHostConfiguration();
    config.StorageConnectionString = storageConn;
    var host = new JobHost(config);
    host.RunAndBlock();
}

public void ProcessQueueMessage([QueueTrigger("orders")]string message, TextWriter log)
{
    log.WriteLine("New message has arrived from queue");
    ...
}

If several messages were added simultaneously, Azure triggers several instances of the handler method in parallel. The number of messages which can be processed in parallel can be configured via the JobHostConfiguration.Queues.BatchSize property. By default it is set to 16, i.e. by default 16 messages can be processed simultaneously in one batch:

(screenshot: default JobHostConfiguration.Queues values - BatchSize = 16, MaxBatchSize = 32)

If we want, e.g., to configure the continuous web job so that it processes only 1 message at a time, we can set BatchSize to 1:

public virtual void RunContinuous()
{
    var storageConn = ConfigurationManager.ConnectionStrings["AzureWebJobsDashboard"].ConnectionString;
    var config = new JobHostConfiguration();
    config.StorageConnectionString = storageConn;

    // set batch size to 1
    config.Queues.BatchSize = 1;

    var host = new JobHost(config);
    host.RunAndBlock();
}

public void ProcessQueueMessage([QueueTrigger("orders")]string message, TextWriter log)
{
    log.WriteLine("New message has arrived from queue");
    ...
}

Using the same approach we can increase BatchSize from the default 16 to a bigger value. However, as you may notice from the above screenshot, MaxBatchSize is set to 32, so you may increase BatchSize only up to 32.