Friday, January 14, 2022

How to get Azure storage account associated with Azure function app via PowerShell

As you probably know, an Azure function app is provisioned with a storage account behind it which is used for the internal needs of the function app (e.g. storing compiled DLLs of Azure functions in blob storage, or storing Application Insights logs in tables). If we need to get the instance of the storage account associated with an Azure function app, we may use the following PowerShell:

# get the function app (function apps are built on top of App Service)
$app = Get-AzWebApp -Name "MyFunctionApp" -ResourceGroupName "MyResourceGroup"
# find the app setting which contains the connection string of the associated storage account
$kv = $app.SiteConfig.AppSettings | Where-Object { $_.Name -eq "AzureWebJobsDashboard" }
# extract the storage account name from the connection string
if ($kv.Value -match ";AccountName=(.+?);") {
    $storageAccountName = $Matches[1]
    $storageAccount = Get-AzStorageAccount -StorageAccountName $storageAccountName -ResourceGroupName "MyResourceGroup"
}

In the above code we first get the function app instance using the Get-AzWebApp cmdlet and find the AzureWebJobsDashboard application setting, which contains the connection string of the associated storage account. After that we retrieve the storage account name from the connection string using a regex, and finally call Get-AzStorageAccount to get the actual storage account instance. Hope it will help someone.
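As a side note, the AccountName extraction can be made more robust by splitting the connection string into key/value pairs instead of relying on a regex. A minimal sketch of the idea (the connection string below is a made-up sample, not a real one):

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure storage connection string into a key/value dictionary."""
    parts = (p for p in conn.split(";") if p)
    # split only on the first "=" so values like base64 account keys ("abc==") survive
    return dict(p.split("=", 1) for p in parts)

# made-up sample connection string
conn = "DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=abc123==;EndpointSuffix=core.windows.net"
print(parse_connection_string(conn)["AccountName"])  # → mystorageacct
```

This avoids any assumptions about the order of the keys inside the connection string.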

Thursday, December 16, 2021

Camlex 5.4 release: switch to MIT license

Hello SharePoint developers who use Camlex in their work. I'm glad to announce that starting with version 5.4 Camlex uses the MIT license. Before that Camlex was distributed under the Ms-PL license, but nowadays MIT has become the standard for open source projects as the most permissive license (e.g. PnP.Framework also uses the MIT license). In order to be in line with this trend I changed the Camlex license to MIT. New NuGet packages with version 5.4 for the basic object model version and the CSOM version are already available for download.

Wednesday, December 1, 2021

How to run continuous Azure web job as singleton

Continuous Azure web jobs may be used as subscribers to external events (e.g. a new item has been added to an Azure storage queue). In contrast to scheduled web jobs, which are run by a scheduler (you need to specify a CRON expression for them), continuous web jobs are always running and react to the events they are subscribed to.

When you scale your Azure app service to multiple instances (web jobs may run on different VMs in the background), by default web jobs are also scaled, i.e. they will run on all instances. However, it is possible to change this behavior and run a continuous web job as a singleton on only one instance.

When you create a continuous web job in the Azure portal there is a Scale field which is by default set to Multi instance:


As the tooltip says:

Multi-instance will scale your WebJob across all instances of your App Service plan, single instance will only keep a single copy of your WebJob running regardless of App Service plan instance count.

So during creation of the web job we may set Scale = Single instance and Azure will create it as a singleton.

If you don't want to rely on this setting, which can be changed from the UI, you may add a settings.job file with the following content:

{ "is_singleton": true }

to the root folder of your web job (the same folder which contains the web job's executable file). In this case Azure will create the web job as a singleton even if Scale = Multi instance is selected in the UI, i.e. settings.job has priority over the UI setting.

If you check the logs of a continuous web job created using the above method you should see something like this:

Status changed to Starting
WebJob singleton settings is True
WebJob singleton lock is acquired

This proves that the job runs as a singleton.
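The "singleton lock" in the log above works conceptually like a distributed lock: every instance tries to acquire it, and only the instance which succeeds runs the job. A simplified simulation of that idea (this is an illustration only, not the actual platform implementation):

```python
def start_webjob_on_instances(instance_ids, is_singleton):
    """Simulate which App Service instances end up running the web job."""
    if not is_singleton:
        return list(instance_ids)  # multi instance: the job runs everywhere
    lock_holder = None
    running = []
    for inst in instance_ids:
        if lock_holder is None:  # first instance to acquire the lock wins
            lock_holder = inst
            running.append(inst)
    return running

print(start_webjob_on_instances(["vm-0", "vm-1", "vm-2"], is_singleton=True))   # → ['vm-0']
print(start_webjob_on_instances(["vm-0", "vm-1", "vm-2"], is_singleton=False))  # → ['vm-0', 'vm-1', 'vm-2']
```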


Thursday, November 25, 2021

SharePoint Online remote event receivers attached with Add-PnPEventReceiver depend on the authentication method used

Today I faced another strange problem related to SPO remote event receivers (I wrote about another strange problem here: Strange problem with remote event receivers not firing in SharePoint Online sites whose urls/titles end with digits). I attach a remote event receiver using the Add-PnPEventReceiver cmdlet like this:

Connect-PnPOnline -Url ...
$list = Get-PnPList "MyList"
Add-PnPEventReceiver -List $list.Id -Name TestEventReceiver -Url ... -EventReceiverType ItemUpdated -Synchronization Synchronous

In most cases the RER is attached without any errors. But the question is whether it will be fired after that. As it turned out, it depends on how exactly you connect to the parent SPO site with Connect-PnPOnline: the remote event receiver is attached successfully in all cases (you may see it in a SPO client browser) but in some cases it won't be triggered. In fact, I found that receivers are triggered only if you connect with the UseWebLogin parameter, while in all other cases they are not. In the table below I summarized all the methods I tried:

#  Method                                                         Is RER fired?
1  Connect-PnPOnline -Url ... -ClientId {clientId} -Interactive   No
2  Connect-PnPOnline -Url ... -ClientId {clientId}                No
3  Connect-PnPOnline -Url ...                                     No
4  Connect-PnPOnline -Url ... -UseWebLogin                        Yes


Monday, November 22, 2021

Strange problem with remote event receivers not firing in SharePoint Online sites whose urls/titles end with digits

Some time ago I wrote about SharePoint Online remote event receivers (RERs) and how to use Azure functions with them: Use Azure function as remote event receiver for Sharepoint Online list and debug it locally with ngrok. When I tested RERs on other SPO sites I ran into a very strange problem: on those sites whose urls/titles end with digits, remote event receivers were not fired. E.g. I often create modern Team/Communication sites with a datetime stamp at the end:

https://{tenant}.sharepoint.com/sites/{Prefix}{yyyyMMddHHmm}
e.g.
https://{tenant}.sharepoint.com/sites/Test202111221800

I noticed that on such sites the RER was not called for some reason. I used the same PowerShell script for attaching the RER to the SPO site as described in the above article, and the same Azure function app running locally with the same ports and ngrok tunneling.

After I created a site without digits at the end (https://{tenant}.sharepoint.com/sites/test) the RER started to work (without restarting the Azure function or ngrok, i.e. I used the same running instances of AF/ngrok for all tests, which ran the whole time). I haven't found any explanation for this problem so far. If you have faced this issue and know the reason, please share it.

Friday, November 12, 2021

Fix Azure functions error "Repository has more than 10 non-decryptable secrets backups (host)"

Today I faced a strange issue: after an update of an Azure function app made from Visual Studio, the Azure functions stopped working. The following error was shown in the logs:

Repository has more than 10 non-decryptable secrets backups (host)

In order to fix this error perform the following steps:

1. In the Azure portal go to Azure function app > Deployment center > FTPS credentials and copy the credentials for connecting to the function app by FTP:


2. Then connect to the Azure function app by FTP, e.g. using the WinSCP client.

3. Go to the /data/functions/secrets folder and remove all files whose names have the following form:
*.snapshot.{timestamp}.json

4. After that go to Azure portal > Function app and restart it. It should start now.
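To illustrate which files step 3 targets: the secrets folder contains the current secrets plus backup snapshots, and only the snapshot files should be removed. A sketch of the filter (the file names below are made up for illustration):

```python
import fnmatch

def snapshot_backups(file_names):
    """Return only the secrets backup snapshots (*.snapshot.{timestamp}.json)."""
    return [f for f in file_names if fnmatch.fnmatch(f, "*.snapshot.*.json")]

# made-up listing of /data/functions/secrets
files = ["host.json", "MyFunction.json", "host.snapshot.637712345678901234.json"]
print(snapshot_backups(files))  # → ['host.snapshot.637712345678901234.json']
```

The current secrets files (like host.json) do not match the pattern and stay in place.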

Wednesday, November 3, 2021

How to configure the number of retries for processing a failed Azure queue message before it is moved to the poison queue

In my previous post I showed how to configure an Azure web job host to set the number of queue messages which may be processed simultaneously (in parallel): How to set limit on simultaneously processed queue messages with continuous Azure web job. In this post I will show how to configure the web job host for processing failed messages.

As mentioned in the previous post, when a message is added to an Azure queue the runtime will call the method whose first parameter is decorated with the [QueueTrigger] attribute. If there was an unhandled exception, the message will be marked as failed. The Azure runtime will try to process this message again - by default there are a maximum of 5 retries for failed messages.

If we don't need 5 attempts (e.g. if our logic requires only 1 attempt) we may change this behavior using the following code:

public virtual void RunContinuous()
{
    var storageConn = ConfigurationManager.ConnectionStrings["AzureWebJobsDashboard"].ConnectionString;
    var config = new JobHostConfiguration();
    config.DashboardConnectionString = storageConn;
    config.StorageConnectionString = storageConn;

    // move failed messages to the poison queue after the first failed attempt
    config.Queues.MaxDequeueCount = 1;

    var host = new JobHost(config);
    host.RunAndBlock();
}

public void ProcessQueueMessage([QueueTrigger("orders")]string message, TextWriter log)
{
    log.WriteLine("New message has arrived from queue");
    ...
}

I.e. we set config.Queues.MaxDequeueCount to 1. In this case a failed message will be moved to the poison queue immediately and the web job won't try to process it again.
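Conceptually, the runtime's retry behavior can be sketched like this (a simplified simulation for illustration, not the actual WebJobs SDK code):

```python
def process_with_retries(message, handler, max_dequeue_count=5):
    """Simulate queue retry behavior: retry a failing message up to
    max_dequeue_count times, then move it to the poison queue."""
    poison_queue = []
    for attempt in range(1, max_dequeue_count + 1):
        try:
            handler(message)
            return poison_queue  # processed successfully, poison queue stays empty
        except Exception:
            pass  # failed attempt; the message becomes visible again and is retried
    poison_queue.append(message)  # all attempts failed
    return poison_queue

def always_fails(msg):
    raise RuntimeError("processing failed")

# with MaxDequeueCount = 1 the message goes to the poison queue after one attempt
print(process_with_retries("order-1", always_fails, max_dequeue_count=1))  # → ['order-1']
```

With the default of 5 the same failing message would be attempted 5 times before ending up in the poison queue.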