
Thursday, June 22, 2023

Implement TAP/multithread friendly logging scopes for Microsoft.Extensions.Logging.ILogger

Some time ago I wrote a post about how to implement a custom logger which writes logs to an Azure storage blob container: Custom logger for .Net Core for writing logs to Azure BLOB storage. This logger implements the ILogger interface from the Microsoft.Extensions.Logging namespace. It works quite well but doesn't support logging scopes:

public IDisposable BeginScope<TState>(TState state) => default!;

Logging scopes are quite useful - they allow you to attach additional information to log records, e.g. from which method a record was added, etc.:

public void Foo()
{
    using (logger.BeginScope("Outer scope"))
    {
        ...
        using (logger.BeginScope("Inner scope"))
        {
        }
    }
}

An important requirement for logging scopes is that they should work properly in Task-based asynchronous pattern (TAP) and multithreaded code (which is widely used nowadays). For that we will use the AsyncLocal<T> class from .NET. For implementing the scopes themselves we will use a linked list (child-parent relation).

To implement it we will create a LogScopeProvider class which implements the Microsoft.Extensions.Logging.IExternalScopeProvider interface (as a base example for this custom LogScopeProvider I used code from the Microsoft.Extensions.Logging.Console namespace):

public class LogScopeProvider : IExternalScopeProvider
{
    private readonly AsyncLocal<LogScope> currentScope = new AsyncLocal<LogScope>();

    public object Current => this.currentScope.Value?.State;

    public LogScopeProvider() {}

    public void ForEachScope<TState>(Action<object, TState> callback, TState state)
    {
        void Report(LogScope current)
        {
            if (current == null)
            {
                return;
            }
            Report(current.Parent);
            callback(current.State, state);
        }

        Report(this.currentScope.Value);
    }

    public IDisposable Push(object state)
    {
        LogScope parent = this.currentScope.Value;
        var newScope = new LogScope(this, state, parent);
        this.currentScope.Value = newScope;

        return newScope;
    }

    private class LogScope : IDisposable
    {
        private readonly LogScopeProvider provider;
        private bool isDisposed;

        internal LogScope(LogScopeProvider provider, object state, LogScope parent)
        {
            this.provider = provider;
            State = state;
            Parent = parent;
        }

        public LogScope Parent { get; }

        public object State { get; }

        public override string ToString()
        {
            return State?.ToString();
        }

        public void Dispose()
        {
            if (!this.isDisposed)
            {
                this.provider.currentScope.Value = Parent;
                this.isDisposed = true;
            }
        }
    }
}

Note that LogScopeProvider stores scopes in an AsyncLocal<LogScope>, which allows it to be used in TAP code. So e.g. if we have an await inside using(scope) it will be handled correctly:

public async Task Foo()
{
    using (logger.BeginScope("Outer scope"))
    {
        var result = await Bar();
        ...
    }
}

Now returning to our blob storage logger: all we have to do is pass a LogScopeProvider to its constructor, add the current scope to the log record and return a new scope when it is requested (note the scope-related parts of the code below):

public class BlobLogger : ILogger
{
    private const string CONTAINER_NAME = "custom-logs";
    private string connStr;
    private string categoryName;
    private LogScopeProvider scopeProvider;
 
    public BlobLogger(string categoryName, string connStr, LogScopeProvider scopeProvider)
    {
        this.connStr = connStr;
        this.categoryName = categoryName;
        this.scopeProvider = scopeProvider;
    }
 
    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception? exception,
        Func<TState, Exception?, string> formatter)
    {
        if (!IsEnabled(logLevel))
        {
            return;
        }
 
        string scope = this.scopeProvider.Current as string;
        using (var ms = new MemoryStream(Encoding.UTF8.GetBytes($"[{this.categoryName}: {logLevel,-12}] {scope} {formatter(state, exception)}{Environment.NewLine}")))
        {
            var container = this.ensureContainer();
            var now = DateTime.UtcNow;
            var blob = container.GetAppendBlobClient($"{now:yyyyMMdd}/log.txt");
            blob.CreateIfNotExists();
            blob.AppendBlock(ms);
        }
    }
 
    private BlobContainerClient ensureContainer()
    {
        var container = new BlobContainerClient(this.connStr, CONTAINER_NAME);
        container.CreateIfNotExists();
        return container;
    }
 
    public bool IsEnabled(LogLevel logLevel) => true;
 
    public IDisposable BeginScope<TState>(TState state) => this.scopeProvider.Push(state);
}
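
Before wrapping up, here is a minimal sketch of how BlobLogger and LogScopeProvider could be wired into the logging pipeline via a custom ILoggerProvider. The provider class and its registration below are my assumptions for illustration, not part of the original implementation:

public class BlobLoggerProvider : ILoggerProvider
{
    private readonly string connStr;
    // One shared LogScopeProvider so scopes opened via any ILogger flow to all categories
    private readonly LogScopeProvider scopeProvider = new LogScopeProvider();

    public BlobLoggerProvider(string connStr)
    {
        this.connStr = connStr;
    }

    public ILogger CreateLogger(string categoryName) =>
        new BlobLogger(categoryName, this.connStr, this.scopeProvider);

    public void Dispose() { }
}

// Hypothetical registration, e.g. in Program.cs:
// builder.Logging.AddProvider(new BlobLoggerProvider(connStr));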

That's it: now our logger also supports logging scopes.

Wednesday, September 7, 2022

Fix problem with PnP.PowerShell log file locked for writing by Set-PnPTraceLog

If you use PnP.PowerShell then most probably you are familiar with the Set-PnPTraceLog cmdlet which allows you to enable logging from PnP cmdlets. This is a useful feature which simplifies troubleshooting, especially if an error is reproduced only on the customer's environment. However there is one unpleasant side effect: Set-PnPTraceLog locks the log file for writing, i.e. it is not possible to reuse the same file for other purposes and write to it from other sources. Let's see why it happens.

Internally Set-PnPTraceLog uses TextWriterTraceListener (thanks to Gautam Sheth for sharing it).

If we decompile the code of TextWriterTraceListener (which is quite easy in VS2022) we will find that it opens a FileStream with the FileShare.Read option.


And that's exactly the reason why it is not possible to write anything else to this log file until the PowerShell session where Set-PnPTraceLog was called is closed.
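
To see the locking behavior in isolation, here is a small C# sketch (my own illustration, not PnP or .NET source code) which shows why a second writer is rejected when the first handle is opened with FileShare.Read, as TextWriterTraceListener does. The file path is hypothetical:

using System;
using System.IO;

class FileShareDemo
{
    static void Main()
    {
        string path = @"C:\temp\pnp.log"; // hypothetical path

        // First writer shares the file for reading only - this mimics the listener's behavior
        using (var first = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read))
        {
            try
            {
                // Second write handle fails with IOException: the file is locked for writing
                using (var second = new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite))
                {
                }
            }
            catch (IOException ex)
            {
                Console.WriteLine("Cannot open second writer: " + ex.Message);
            }
        }
    }
}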

In order to solve this problem we need to use our own FileStream and inject it into the PnP logging system. It can be done with the following PowerShell code:

# enable logging
Set-PnPTraceLog -On -LogFile $logFilePath -Level Debug
# close default file stream
[System.Diagnostics.Trace]::Listeners[1].Writer.Close()
# open new file stream with FileShare.ReadWrite
$fileStream = New-Object System.IO.FileStream($logFilePath, [System.IO.FileMode]::Append, [System.IO.FileAccess]::Write, [System.IO.FileShare]::ReadWrite, 4096)
# inject new file stream to PnP
[System.Diagnostics.Trace]::Listeners[1].Writer = New-Object System.IO.StreamWriter($fileStream, [System.Text.Encoding]::UTF8, 4096, $false)

Here we first enable PnP logging with Set-PnPTraceLog. Every process by default already has a trace listener called DefaultTraceListener which is accessible via Trace.Listeners[0]. In order to access the listener added by Set-PnPTraceLog we need to access Trace.Listeners[1]. After that we open a new FileStream with FileShare.ReadWrite and use it for a StreamWriter which we inject into PnP. As a result PnP will use our own FileStream and it will be possible to write to the same log from other sources.

Wednesday, August 17, 2022

Get Sharepoint data in browser console via Rest API without additional tools

Sometimes during troubleshooting you need to quickly get some data from Sharepoint, e.g. the id of the current site collection. There are many ways to do that with additional tools, e.g. from PowerShell and PnP, the SPEditor Chrome extension and its pnpjs console, etc. But it requires installation of these tools and knowledge of them (of course if you work with Sharepoint it is better to know these tools :) ).

One way to get this data without extra tools is to use the SP Rest API directly from the browser console. E.g. for getting site collection details we may fetch the /_api/site endpoint and output the JSON response to the console:

fetch("https://{mytenant}.sharepoint.com/sites/test/_api/site", {headers: {"accept": "application/json; odata=verbose"}}).then(response => response.json().then(txt => console.log(JSON.stringify(txt))))

(here instead of {mytenant} you should use your tenant name. Note that this approach also works on-prem)

It will output a lot of information about the current site collection to the console:

{
    "d": {
        "__metadata": {
            "id": "https://{mytenant}.sharepoint.com/sites/test/_api/site",
            "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site",
            "type": "SP.Site"
        },
        "Audit": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/Audit"
            }
        },
        "CustomScriptSafeDomains": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/CustomScriptSafeDomains"
            }
        },
        "EventReceivers": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/EventReceivers"
            }
        },
        "Features": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/Features"
            }
        },
        "HubSiteSynchronizableVisitorGroup": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/HubSiteSynchronizableVisitorGroup"
            }
        },
        "Owner": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/Owner"
            }
        },
        "RecycleBin": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/RecycleBin"
            }
        },
        "RootWeb": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/RootWeb"
            }
        },
        "SecondaryContact": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/SecondaryContact"
            }
        },
        "UserCustomActions": {
            "__deferred": {
                "uri": "https://{mytenant}.sharepoint.com/sites/test/_api/site/UserCustomActions"
            }
        },
        "AllowCreateDeclarativeWorkflow": false,
        "AllowDesigner": true,
        "AllowMasterPageEditing": false,
        "AllowRevertFromTemplate": false,
        "AllowSaveDeclarativeWorkflowAsTemplate": false,
        "AllowSavePublishDeclarativeWorkflow": false,
        "AllowSelfServiceUpgrade": true,
        "AllowSelfServiceUpgradeEvaluation": true,
        "AuditLogTrimmingRetention": 90,
        "ChannelGroupId": "00000000-0000-0000-0000-000000000000",
        "Classification": "",
        "CompatibilityLevel": 15,
        "CurrentChangeToken": {
            "__metadata": {
                "type": "SP.ChangeToken"
            },
            "StringValue": "..."
        },
        "DisableAppViews": false,
        "DisableCompanyWideSharingLinks": false,
        "DisableFlows": false,
        "ExternalSharingTipsEnabled": false,
        "GeoLocation": "EUR",
        "GroupId": "00000000-0000-0000-0000-000000000000",
        "HubSiteId": "00000000-0000-0000-0000-000000000000",
        "Id": "32d406dc-dc97-46dd-b01c-e6346419ceb7",
        "SensitivityLabelId": null,
        "SensitivityLabel": "00000000-0000-0000-0000-000000000000",
        "IsHubSite": false,
        "LockIssue": null,
        "MaxItemsPerThrottledOperation": 5000,
        "MediaTranscriptionDisabled": false,
        "NeedsB2BUpgrade": false,
        "ResourcePath": {
            "__metadata": {
                "type": "SP.ResourcePath"
            },
            "DecodedUrl": "https://{mytenant}.sharepoint.com/sites/test"
        },
        "PrimaryUri": "https://{mytenant}.sharepoint.com/sites/test",
        "ReadOnly": false,
        "RequiredDesignerVersion": "15.0.0.0",
        "SandboxedCodeActivationCapability": 2,
        "ServerRelativeUrl": "/sites/test",
        "ShareByEmailEnabled": false,
        "ShareByLinkEnabled": false,
        "ShowUrlStructure": false,
        "TrimAuditLog": true,
        "UIVersionConfigurationEnabled": false,
        "UpgradeReminderDate": "1899-12-30T00:00:00",
        "UpgradeScheduled": false,
        "UpgradeScheduledDate": "1753-01-01T00:00:00",
        "Upgrading": false,
        "Url": "https://{mytenant}.sharepoint.com/sites/test",
        "WriteLocked": false
    }
}

Using the same approach you may call other Rest API endpoints directly from the browser console. It may save you time during troubleshooting. Hope this information will help someone.

Monday, November 18, 2013

How to restore site collection from higher Sharepoint version

Sometimes you may face a situation where a bug is only reproducible on production, but not e.g. on QA or on your local development environment. Such problems are much harder to troubleshoot. Often they are caused by content which exists only on production. And if troubleshooting directly on production is problematic (e.g. if you don’t have remote desktop access to it), you should get a backup of the site collection or the whole content database, restore it on your local dev environment and try to reproduce the bug there. But what to do if you have a lower Sharepoint version on your local environment than on production? Of course it is better to have the same versions, but the world is not ideal and sometimes we may face such a situation. In this post I will show a trick for restoring a site collection from a higher Sharepoint version. Before starting I need to warn that this is actually a hack and you should not rely on it. There is no guarantee that it will work in your particular case, because the new Sharepoint version may have a different schema, incompatible with the previous one (that’s why the standard way is not allowed).

Ok, suppose that we have a site collection backup created with the Backup-SPSite cmdlet:

Backup-SPSite http://example1.com -Path C:\Backup\example1.bak

We copied it on local environment and want to restore it with Restore-SPSite:

Restore-SPSite http://example2.com -Path C:\Backup\example1.bak -Confirm:$false

(Here I intentionally used different urls for the source and target sites in order to show that it is possible to restore a site collection to a different url). If we have a lower Sharepoint version on the local environment we will get an unclear nativehr exception which won’t say anything. But if we make our logging verbose and check the Sharepoint logs, we will find the following error:

Could not deserialize site from C:\Backup\example1.bak. Microsoft.SharePoint.SPException: Schema version of backup 15.0.4505.1005 does not match current schema version 15.0.4420.1017.

(The exact version numbers are not important. For this post it is only important that the source version 15.0.4505.1005 is higher than the target version 15.0.4420.1017).

What to do in this case? Mount-SPContentDatabase also won’t work for the same reason, i.e. a content database backup won’t work either. In this case we can either update our environment (and you should consider this option as the basic one) or go the non-standard way.

For the non-standard way we will need a hex editor. At first I thought that a site collection backup is a regular .cab file, so it would be possible to uncompress it, edit the text files inside and compress it back (I described this trick in this post: Retain object identity during export and import of subsites in Sharepoint into different site collection hierarchy location), but this is not the case with site collection backups made with the mentioned cmdlets. They look like regular binary files. So we will need a hex editor for modifying it. I used the HxD hex editor, but you can use any other as well.

If we open the backup file in it and try to find the version which we got from the error message in the log, we will find that it is located at the beginning of the file:

[screenshot: backup file opened in the hex editor with the schema version 15.0.4505.1005 visible near the beginning of the file]

The good thing is that the version is stored only once. So we will now change the source version to the target version in the hex editor:

[screenshot: the schema version replaced with the target version 15.0.4420.1017 in the hex editor]

Now save it and run Restore-SPSite again. This time the restore should work. Hope that this trick will help someone. But remember that it is a hack and use it carefully.

Sunday, September 2, 2012

Debug issues on production Sharepoint farm

In Sharepoint development it is not unusual to have multiple working environments: development, QA, production. The development environment in most cases is a single-farm environment, while QA is similar to production and has several WFEs, an app server and a db server. Also in multi-vendor projects it may be that you as the software provider don’t have access to QA and production: they are under the control of another company responsible for IT infrastructure.

Such projects require more accurate development and quality assurance. However it may still happen that a solution works properly on the dev environment, but after deploying it to QA a problem occurs. What to do in such a situation? How to troubleshoot issues if you don’t even have remote desktop access?

You need a mechanism which is powerful and flexible enough to figure out where the problem comes from and doesn’t require a lot of effort from the infrastructure maintenance company. One of the most efficient ways to troubleshoot in such a situation, which I found while working on many multi-vendor projects, is to create a custom application layouts page, ask an administrator from the infrastructure company to copy it to the 14/layouts subfolder on one of the WFEs and open it there in the context of the production site.

The page itself may have any logic implemented via server code. Code may be embedded into the aspx page as a server-side script:

<%@ Page Language="C#" %>
<%@ Assembly Name="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Import Namespace="Microsoft.SharePoint" %>
<%@ Import Namespace="System.Web" %>

<%
    this.lbl.Text = SPContext.Current.Web.CurrentUser.Name;
%>

CurrentUser:&nbsp;<asp:Label ID="lbl" runat="server" />

By default you may use C# 2.0 in server-side scripts. In one of my previous posts I wrote about how to enable C# 3.0 in application layouts pages: see Use C# 3.0 features in application layout pages in Sharepoint.

If you need to test a lot of code it may take time to embed the codebehind code into the aspx page. There is another way to execute server code: with it you will have an aspx page and a separate .cs file with the logic. The method is based on the CodeFile attribute of the Page directive. In this case the codebehind class will be compiled at runtime by ASP.Net. You need to specify the path to the .cs file in this attribute and then in the Inherits attribute specify the page class from this .cs file. Here is an example:

<%@ Page Language="C#" CodeFile="~/_layouts/test/Test.aspx.cs" Inherits="MyNamespace.Test" %>
<%@ Assembly Name="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>
<%@ Import Namespace="Microsoft.SharePoint" %>
<%@ Import Namespace="System.Web" %>

CurrentUser:<asp:Label ID="lbl" runat="server" />

In Test.aspx.cs you need to specify the base class of the page:

using System;
using System.IO;
using System.Web;
using Microsoft.SharePoint;
using Microsoft.SharePoint.WebControls;

namespace MyNamespace
{
    public partial class Test : LayoutsPageBase
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            this.lbl.Text = SPContext.Current.Web.CurrentUser.Name;
        }
    }
}

Note that there is no need to declare protected control variables in the page class. They will be added automatically at runtime by the ASP.Net compiler in a partial class (that’s why you need to make your custom page class partial, as shown in the example above). This is a quite powerful technique which allows you to test existing application layout pages almost without changing them. Also it requires minimum actions from the farm administrator of the infrastructure maintenance company, which in real life is a very important advantage.
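
For illustration, the partial class which the ASP.Net compiler generates at runtime for the Test.aspx page above would contain roughly the following control declaration (a simplified sketch, not the actual generated code):

namespace MyNamespace
{
    public partial class Test
    {
        // Declared automatically by the page compiler based on the markup
        protected global::System.Web.UI.WebControls.Label lbl;
    }
}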

Thursday, August 30, 2012

Problem with resolving resource strings in Sharepoint via SPUtility.GetLocalizedString

The SPUtility.GetLocalizedString method can be used to get a translated string from your resource file at runtime in a Sharepoint web site. Resource files should be located in the 14/Resources folder. We often use it during development. Recently we faced a strange situation: there was a farm with 2 WFEs. On this farm we updated a wsp package which, among other artifacts, contained updates for provisioning resources, i.e. we added several new strings into the resx files which are provisioned to 14/Resources. After the wsp upgrade we ran iisreset on both WFEs (SPUtility.GetLocalizedString caches results).

After that a problem occurred: SPUtility.GetLocalizedString showed the translated string from the resx file only on one WFE, while on the other it didn’t find it. We checked that the resx files were updated correctly on the problematic WFE and that they were the same as on the WFE where everything was working.

While investigating the problem in ULS I found the following errors:

Failed to look up string with key "key_name", keyfile MyResources
Localized resource for token 'key_name' could not be found for file with path: "(unavailable)"

And it was strange because, as I said, the resx files were updated correctly. During the investigation we also tried to copy the resx files into App_GlobalResources, but it didn’t help.

In the project we had the following provisioning resources:

  • MyResources.resx – default English culture
  • MyResources.fi-fi.resx – Finnish resources
  • MyResources.nl-nl.resx – Dutch resources

I created a simple application layouts page which calls SPUtility.GetLocalizedString with 3 locale ids (English = 1033, Finnish = 1035 and Dutch = 1043) and opened it in the context of a site on the problematic WFE. It successfully found the Finnish and Dutch resources, but not the default English resources. How could that be if MyResources.resx was the same as on the working WFE?
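
The logic of that test page was roughly the following (a hedged reconstruction, not the exact code; the key name is taken from the ULS error above and SPUtility comes from Microsoft.SharePoint.Utilities):

// Resolve the same resource key for the three locales used in the project
uint[] lcids = { 1033, 1035, 1043 }; // English, Finnish, Dutch
foreach (uint lcid in lcids)
{
    string value = SPUtility.GetLocalizedString("$Resources:MyResources,key_name", "MyResources", lcid);
    // e.g. write the value into a Label or into the response to compare results per locale
}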

I went to the 14/Resources folder in order to check MyResources.resx more carefully, and found that there was another culture-specific resource file for the English locale: MyResources.en-us.resx. It had been added manually by someone for some testing and was not deleted after that. Of course it didn’t contain our new strings. But SPUtility.GetLocalizedString found this file and, as it was culture-specific, it had higher priority than the default culture file MyResources.resx. When we deleted MyResources.en-us.resx from the problematic WFE, it started working. Hope this story will help you if you face the same problem.

Saturday, August 25, 2012

Use Developer dashboard with UpdatePanel in ajax-enabled Sharepoint sites

Developer dashboard is a convenient tool for troubleshooting in Sharepoint. It allows you to see how much time each function takes during the request lifetime (by default it shows some standard functions; if you want to see your own custom functions you have to enclose their body within SPMonitoredScope). There are a lot of resources available on the internet which show in detail how to configure and use it. I won’t repeat them here. Instead I will show how to use Developer dashboard in ajax-enabled web applications, where most communication with the user occurs inside an UpdatePanel.

The common usage of Developer dashboard in this scenario is the following: on the masterpage there are DeveloperDashboard and DeveloperDashboardLauncher controls. The main content placeholder is located within an UpdatePanel. It is also possible that the UpdatePanel is added on a particular page or web part. And there is one problem with such a configuration: Developer dashboard will show you information only about the 1st http get request, while other asynchronous http post requests originating from inside the UpdatePanel won’t be traced.

The solution to this problem is quite simple: add the DeveloperDashboard control into its own separate UpdatePanel. The key element for understanding how it works is the UpdatePanel.UpdateMode property. It may have 2 possible values:

  • Always (default)
  • Conditional

As the documentation says, when you use the Always update mode (which is the default):

The content of the UpdatePanel control is updated for all postbacks that originate from the page. This includes asynchronous postbacks.

I.e. the UpdatePanel with DeveloperDashboard will be updated even if requests come from other UpdatePanels. If in your application you use Conditional mode, you have to explicitly call the Update() method on the UpdatePanel with DeveloperDashboard when a request comes from another “basic” UpdatePanel, as shown in the sketch below. After that you will be able to trace all requests, not only the 1st http get request for the page. Hope this technique will help you.
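
For the Conditional case, here is a hedged codebehind sketch of that explicit Update() call (the panel name dashboardUpdatePanel is my assumption for the UpdatePanel which wraps the DeveloperDashboard control):

protected void Page_Load(object sender, EventArgs e)
{
    ScriptManager sm = ScriptManager.GetCurrent(this.Page);
    // On asynchronous postbacks coming from other UpdatePanels,
    // refresh the conditional panel which hosts the DeveloperDashboard control
    if (sm != null && sm.IsInAsyncPostBack)
    {
        this.dashboardUpdatePanel.Update();
    }
}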

Friday, July 6, 2012

Several ways to implement custom ULS logging service in Sharepoint

In Sharepoint 2010 you may implement a custom logging service – an inheritor of SPDiagnosticsServiceBase – which will allow you to specify a user friendly product name in ULS logs and add other custom behaviors. You may also use the standard SPDiagnosticsService, but it will show you product name = Unknown which is not very useful. If you search for examples, 2 popular approaches are the following:

1. a singleton with a static class property which creates an instance of the logging service using a private constructor. An example is shown in this post by Waldek Mastykarz:

public class LoggingService : SPDiagnosticsServiceBase
{
    private LoggingService()
        : base("My Logging Service", SPFarm.Local)
    {
    }

    private static LoggingService current;
    public static LoggingService Current
    {
        get
        {
            if (current == null)
            {
                current = new LoggingService();
            }
            return current;
        }
    }

    ...
}

2. also uses a static class property, but retrieves the logging service instance from the farm configuration database, as shown in this example on msdn:

public class LoggingService : SPDiagnosticsServiceBase
{
    public LoggingService()
    {
    }

    public LoggingService(string name, SPFarm farm)
        : base(name, farm)
    {
    }

    public static LoggingService Local
    {
        get
        {
            return GetLocal<LoggingService>();
        }
    }
}

Both approaches work. Depending on your needs and priorities you may select the 1st or the 2nd way.

SPDiagnosticsServiceBase inherits (indirectly) from SPPersistedObject, so the service is stored in serialized form in the configuration database and Sharepoint deserializes it using reflection when we call the GetLocal<T>() method. In order to do this it requires 2 public constructors on the logging service class: a parameterless one and one with 2 parameters (string, SPFarm) – otherwise you will get a "Constructor not found" exception at runtime. The 1st method uses the constructor for creating the object, so its advantage is that it is faster. Why do we need the 2nd way then?

The 2nd way is needed if you want to make your custom logging service configurable via Central administration > Monitoring > Configure diagnostic logging. I.e. if you want to specify different trace and event severities for different categories via the UI, as you can do for standard categories, you should use the 2nd way.

With the 1st way, even if you register the logging service in the farm config database (by calling the service.Update() method), changes in CA won’t affect your application because you get an instance via the constructor each time. This instance will contain the logging categories returned from its ProvideAreas method with default trace and event severities. In CA, when you select the product name or a particular category under it, you may change these severities, but it won’t affect the instance created by the constructor. Actually it won’t immediately affect your application even if you try to optimize the 2nd approach and make it like this:

public class LoggingService : SPDiagnosticsServiceBase
{
    public LoggingService()
    {
    }

    public LoggingService(string name, SPFarm farm)
        : base(name, farm)
    {
    }

    private static LoggingService current;
    public static LoggingService Local
    {
        get
        {
            if (current == null)
            {
                current = GetLocal<LoggingService>();
            }
            return current;
        }
    }
}

In this case changes from CA will be applied only after the next app pool recycle, when LoggingService.current is re-initialized.

With the 2nd approach you will always get the latest copy of LoggingService with changes from CA. Note that if you don’t register the service in the config database and try to use it, then most probably when calling LoggingService.Current you will get an Access denied error. It happens because internally it will try to register itself by calling the SPPersistedObject.Update method and will fail when called in the context of a non-CA web application. This is a known problem. As a workaround you may call it in the context of the CA web app (e.g. in an application layouts page). But the recommended approach is to register it in a feature receiver:

var service = LoggingService.Current;
if (service != null)
{
    service.Update();
}

In the same way you may delete it from the config database on the feature deactivating event:

var service = LoggingService.Current;
if (service != null)
{
    service.Delete();
}
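
Whichever approach you choose, the service itself typically overrides ProvideAreas to declare its categories and exposes a small helper for writing to ULS. Here is a minimal hedged sketch (the area and category names are just examples, not from the original post; the snippet assumes using directives for System.Collections.Generic and Microsoft.SharePoint.Administration):

public class LoggingService : SPDiagnosticsServiceBase
{
    public LoggingService()
    {
    }

    public LoggingService(string name, SPFarm farm)
        : base(name, farm)
    {
    }

    // Named Current here so that the feature receiver snippets above work unchanged
    public static LoggingService Current
    {
        get { return GetLocal<LoggingService>(); }
    }

    // Declares the product name and categories shown in CA and in ULS logs
    protected override IEnumerable<SPDiagnosticsArea> ProvideAreas()
    {
        var categories = new List<SPDiagnosticsCategory>
        {
            new SPDiagnosticsCategory("My Category", TraceSeverity.Medium, EventSeverity.Information)
        };
        yield return new SPDiagnosticsArea("My Logging Service", categories);
    }

    // Small helper for writing a trace message to ULS under the custom category
    public static void WriteLog(string message)
    {
        var category = Current.Areas["My Logging Service"].Categories["My Category"];
        Current.WriteTrace(0, category, TraceSeverity.Medium, message);
    }
}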

That's all I wanted to tell in this article. Hope it will help you when you choose how to implement your logging service.

Sunday, June 24, 2012

How to trigger and test daily alerts in Sharepoint

In Sharepoint it is possible to create alerts with different frequencies:

  • immediate – sent when the immediate alerts job runs next time
  • daily – sent daily, also by the immediate alerts job
  • weekly – sent weekly

If you create a new daily alert and want to see whether it will work or not, it is not very convenient to wait up to 24 hours until Sharepoint sends it next time. In this post I will show several ways to trigger summary alerts and send them when you need.

Method 1. When you add a new daily alert, a new row is added to the SchedSubscriptions table in the Sharepoint content database. This is the key element of this method. We are interested in the following 2 columns in this table:

  1. NotifyTime
  2. NotifyTimeUTC (NotifyTime minus 3 hours)

In these columns Sharepoint stores the time when the daily alert for a particular list will be sent next. So first of all determine the row which corresponds to your list:

SELECT * FROM SchedSubscriptions

The table contains SiteUrl, WebUrl and ListUrl columns. Using them you will be able to find the needed row. Copy its Id (uniqueidentifier) and execute the following SQL query:

declare @s datetime
declare @u datetime
set @s = CAST('2012-06-24 12:00:00.000' as datetime)
set @u = CAST('2012-06-24 09:00:00.000' as datetime)

update dbo.SchedSubscriptions
set NotifyTime = @s, NotifyTimeUTC = @u
where Id = '...'

In this example in Id you should specify the value which you copied from the previous query’s result; @s corresponds to NotifyTime, @u to NotifyTimeUTC (NotifyTime minus 3 hours). The time should be in the past (compared with the current datetime) – only in this case will Sharepoint send the daily alerts.

After that wait some time. The exact waiting time depends on the job-immediate-alerts property which can be determined with the following command:

stsadm -o getproperty -pn job-immediate-alerts -url http://example.com

For testing you can set it to 1 minute:

stsadm -o setproperty -pn job-immediate-alerts -url http://example.com -pv "every 1 minutes between 0 and 59"

but after testing it is better to revert it e.g. to 5 minutes:

stsadm -o setproperty -pn job-immediate-alerts -url http://example.com -pv "every 5 minutes"

After this time, if you check the SchedSubscriptions table you will see that the time which you updated is increased by 1 day: in our example it will be “2012-06-25 12:00:00.000” for NotifyTime and “2012-06-25 09:00:00.000” for NotifyTimeUTC. It means that Sharepoint processed the daily alert and it was sent. If everything is Ok, you or the alert’s recipient should get an email with the daily summary.

Method 2. I found it in the following forum thread: SPAlertHandlerParams - not behaving correctly for daily alerts, but I didn’t test it myself. Maybe it will be useful for you as well:

// dispose SPSite and SPWeb correctly
using (SPSite site = new SPSite("http://example.com"))
using (SPWeb web = site.OpenWeb())
{
    SPAlert alert = web.Alerts[new Guid("...")];
    alert.AlertFrequency = SPAlertFrequency.Daily;
    alert.AlertTime = DateTime.Now.AddMinutes(1);
    alert.Update();
}

For general alert troubleshooting I recommend the following articles: The Truth About How Daily SharePoint Alerts Actually Work, Troubleshooting Alerts. They will save you some time. The possibility to trigger daily alerts is very important for troubleshooting. It helped me in my work, hope it will be helpful for you as well.