Tuesday, October 8, 2024

Use nginx running in a Docker container to serve PHP sites running on the host

In one of my previous articles I showed how to use nginx running in a Docker container to serve .NET sites running on the Kestrel web server on the host. In the current article I will show how to do a similar thing with PHP sites.

So the idea is the same: we have nginx running in a Docker container and a number of PHP sites running on the host, and we want the containerized nginx to serve these sites. As examples we will use Mantis (a popular bug tracker) and WordPress (a popular PHP-based CMS engine) sites. For simplicity we assume that the MySQL databases used by the PHP sites are running on the same host.

On the host the PHP sites are served by the PHP-FPM daemon (PHP FastCGI Process Manager), which may listen on an IP:port or on a Unix socket. One option is to use this PHP-FPM from the host, however e.g. in the case of Unix sockets it would require a tricky permissions setup. Also, since we are eager to move to containers anyway, it is preferable to create a lightweight PHP-FPM container which will serve our PHP sites from the host. First of all we need to create a Docker image based on PHP-FPM with the MySQL extensions installed (so PHP will be able to connect to the MySQL db). It can be created with the following Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install mysqli pdo_mysql

On the first line we tell Docker that our image should be based on PHP-FPM 7.4, and on the second line we install the MySQL extensions for PHP so it will be able to connect to the database. Note that by default PHP-FPM containers listen on port 9000 - we will use it when we configure the nginx.conf file for our site.

Then build the image and push it to our Docker image registry:

docker build -t {php-image-tag} .
docker push {php-image-tag}

In the next step we need to set up the nginx and PHP containers with the following docker compose file:

name: myservice
services:
  nginx:
    image: nginx:latest
    volumes:
      - /var/www/myphpsite:/var/www/myphpsite
    ports:
      - 80:80
    restart: always
    networks:
      - mynetwork
    extra_hosts:
      - host.docker.internal:host-gateway
  php:
    image: {php-image-tag}
    volumes:
      - /var/www/myphpsite:/var/www/myphpsite
    restart: unless-stopped
    networks:
      - mynetwork
    extra_hosts:
      - host.docker.internal:host-gateway
# the shared network must be declared at the top level
networks:
  mynetwork:

The important lines here are the ones where we add a volume pointing to the folder with the PHP scripts (/var/www/myphpsite) to both the nginx and PHP services. With them the nginx container will also be able to serve the static files (CSS, images, JS, etc.) of our PHP site, and at the same time the PHP container will have access to the PHP scripts. It is also important that the nginx and PHP containers are located in the same network (mynetwork) - in this case the containers can communicate using container names - and that we added the host.docker.internal extra host (so the PHP-FPM container will be able to connect to the database running on the host).

The next step is to set up nginx.conf for our PHP site:

server {
    listen 80;
    server_name example.com;
    root /var/www/myphpsite;
    ...
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass myservice-php-1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

Here with the root directive we specify that the PHP sources are located in the /var/www/myphpsite folder (remember that we mounted this folder into both the PHP and nginx containers) and configure PHP-FPM to process the PHP scripts. The fastcgi_pass directive instructs nginx to execute a CGI script (PHP in our case) using the FastCGI protocol - which is different from the proxy_pass directive used for .NET sites running on Kestrel, which sends an HTTP request to another web server. Since both containers are located in the same network we may use the container name (myservice-php-1) and port number (9000) in the fastcgi_pass directive.

And last but not least - since the PHP scripts will be processed inside a container, we need to change the host in the database connection settings from localhost to host.docker.internal (we can do that because we added the extra_hosts entry to our containers in the docker compose file). Where it is stored depends on the PHP site: e.g. for Mantis it is the $g_hostname variable in /.../{site_folder}/config/config_inc.php, for WordPress-based sites the DB_HOST constant in /var/www/{site_folder}/wp-config.php (see the example below). After that your PHP sites will work under nginx running in a Docker container.
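For illustration, the changed lines could look roughly like this (a minimal sketch - only the database host changes, the rest of each config file stays untouched):

// wp-config.php (WordPress) - point the site to the MySQL server running on the Docker host
define( 'DB_HOST', 'host.docker.internal' ); // was 'localhost'

// config_inc.php (Mantis) - the same idea
$g_hostname = 'host.docker.internal'; // was 'localhost'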

Wednesday, September 18, 2024

Use nginx running in a Docker container to serve .NET sites running on Kestrel both on the host and in Docker containers

In one of my previous articles I gave a brief introduction to Docker – a containerization technology which gives more control over infrastructure. Suppose that we have a number of .NET sites running on Kestrel on Linux and published to the internet via nginx, as shown in the following schema (port numbers here are just examples):

And we want to move all our infrastructure to Docker (backend, database, nginx itself, etc.) so it will look like this:

In practice, however, this switch takes time, and for some period we will have both sites still running on the Linux host and sites running in Docker containers, and we will need to serve both types:

Since only one app may listen on port 80, we need to decide whether we want to keep the nginx service running on the host or move nginx to Docker and configure it to serve both the sites from the host and the sites from Docker containers. In this article I will describe the latter option.

Since nginx is running in Docker there won't be many problems with hosting the sites which also run in Docker. Just run these sites and the nginx container in the same network:

#docker-compose-nginx.yml
name: myservice
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    …
    networks:
      - mynetwork

and nginx will be able to resolve the containers by name. I.e. if the site's container name is mysite-web-1 you may configure nginx.conf like this:

server {
	listen 80;
	listen [::]:80;
	server_name mysite.ru www.mysite.ru;
	location / {
		proxy_pass http://mysite-web-1:80;
		…
	}
}

However, with the sites running on the host it is not that straightforward. Since nginx is running in a Docker container which has its own IP address, we need to tell it to which IP it should forward requests that come for sites running on the host.

We can do that by adding the special extra host host.docker.internal to the nginx docker compose file:

name: myservice
services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    restart: always
    networks:
      - mynetwork
    extra_hosts:
      - host.docker.internal:host-gateway

If we check the /etc/hosts file inside the nginx container, we will see that host.docker.internal points to the 172.17.0.1 IP address, which is the default IP used by Docker for the host:

docker exec -it myservice-nginx-1 sh
more /etc/hosts
…
172.17.0.1      host.docker.internal

The next step is to instruct nginx to forward a request to the host if it comes for a site running there. To do that we need to modify nginx.conf and specify host.docker.internal with the appropriate port in the proxy_pass directive:

server {
	listen 80;
	listen [::]:80;
	server_name mysite.ru www.mysite.ru;
	location / {
		proxy_pass http://host.docker.internal:5000;
		…
	}
}

However, that is still not enough. If your .NET site is running on Kestrel, you most probably configured it to run as a daemon via the following commands:

systemctl enable mysite.ru.service
systemctl start mysite.ru.service

where the mysite.ru.service file contains the following line:

Environment=ASPNETCORE_URLS=http://localhost:5000
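For context, such a service file for a Kestrel app usually looks roughly like the sketch below (the paths, assembly name and user here are hypothetical examples, not the exact unit from this setup):

[Unit]
Description=mysite.ru ASP.NET Core site

[Service]
WorkingDirectory=/var/www/mysite.ru
ExecStart=/usr/bin/dotnet /var/www/mysite.ru/MySite.dll
Restart=always
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=ASPNETCORE_URLS=http://localhost:5000

[Install]
WantedBy=multi-user.target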

With our current configuration, if we open a shell inside the nginx container and try to reach our site via curl, we will get Connection refused:

docker exec -it myservice-nginx-1 sh
curl -X GET http://host.docker.internal:5000
Connection refused

The problem here is that Kestrel is currently listening only on the localhost IP (127.0.0.1). We can check it on the host using the following command:

netstat -tulpn | grep 5000
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      22496/dotnet
tcp6       0      0 ::1:5000                :::*                    LISTEN      22496/dotnet

but the request from the nginx container goes to 172.17.0.1. To solve that we need to modify /etc/systemd/system/mysite.ru.service and add http://172.17.0.1:5000 to ASPNETCORE_URLS, separated by a semicolon:

Environment=ASPNETCORE_URLS=http://localhost:5000;http://172.17.0.1:5000

and reload the service:

systemctl stop mysite.ru.service
systemctl daemon-reload
systemctl start mysite.ru.service
systemctl status mysite.ru.service

After that, check that it also listens on port 5000 on 172.17.0.1:

netstat -tulpn | grep 5000
tcp        0      0 172.17.0.1:5000         0.0.0.0:*               LISTEN      22496/dotnet
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      22496/dotnet
tcp6       0      0 ::1:5000                :::*                    LISTEN      22496/dotnet

And now if we go inside the container shell and try to reach the site, the connection should be successful:

docker exec -it myservice-nginx-1 sh
curl -X GET http://host.docker.internal:5000
Connection successful

which means that nginx is now able to serve sites running both in containers and on the host itself.

Tuesday, August 27, 2024

Nginx uses the 1st available server configuration when the requested domain is not configured

If you host several sites on nginx (which means that you also have several domain names pointing to your server's IP address), then you may face the following issue. Suppose that we have 3 domain names pointing to the IP of our server running nginx:

example1.com
example2.com
example3.com

The sites example1.com and example2.com are configured in nginx, but there is no configuration for example3.com. The problem is that by default nginx will still serve requests for example3.com with the 1st available server config (depending on the order in which they are defined in the nginx.conf file - let's say example1.com in our example), which may be quite confusing because we requested example3.com but see the content of example1.com in the browser window (the browser address bar will still show example3.com).

In order to avoid this behavior we need to add a default server configuration to nginx.conf which will return 404, like this:

server {
	listen 80 default_server;
	server_name  _;
	return 404;
}

Now if we try to open example3.com we will see the 404 page returned by nginx:

which is clearer and more expected behavior from my point of view.
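A side note: if you also serve HTTPS, a similar catch-all server for port 443 can be added. A sketch, assuming nginx 1.19.4 or newer where the ssl_reject_handshake directive is available (it lets nginx reject requests for unknown hosts without needing a certificate):

server {
	listen 443 ssl default_server;
	server_name _;
	ssl_reject_handshake on;
}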

Wednesday, June 19, 2024

Obfuscar: how to stop the build of a .NET app with an error when obfuscation fails

In my previous post I showed how to obfuscate an ASP.NET Core web app using Obfuscar. There was however one problem: if obfuscation fails, it is ignored by VS by default. In the current post I will show how to make such errors more visible, i.e. how to cause the build to fail if obfuscation failed.

First of all we need to move the obfuscation logic to a PowerShell file (postpublish.ps1 in this example) located in the project folder and specify it in a post-publish event:

  <Target Name="PostBuild" AfterTargets="Publish" Condition=" '$(Configuration)' == 'Release' ">
    <Exec Command="pwsh -ExecutionPolicy Unrestricted $(ProjectDir)postpublish.ps1 -targetFramework $(TargetFramework) -buildConfig $(Configuration) -projectDir $(ProjectDir) -publishDir $(PublishDir)" />
  </Target>

(if you are interested why the pwsh command is used instead of powershell, check the following article: Use PowerShell in Visual Studio build events when building a Docker image for a .NET app)

It is up to you to implement the actual obfuscation logic; in my example I will use the cross-platform Obfuscar dotnet global tool:

# parameters passed from the MSBuild target above
param($targetFramework, $buildConfig, $projectDir, $publishDir)

$s = dotnet tool list -g | Out-String
if (!$s.Contains("obfuscar.globaltool")) {
	Write-Host "Obfuscar global tool not installed. Installing it..."
	& dotnet tool install --global Obfuscar.GlobalTool
} else {
	Write-Host "Obfuscar global tool is already installed"
}

$targetDir = [System.IO.Path]::Combine($projectDir, $publishDir)
$obfuscarXmlPath = [System.IO.Path]::Combine($targetDir, "obfuscar.xml")
Write-Host "Obfuscating assemblies..."
[void]($output = & obfuscar.console $obfuscarXmlPath)
$output
if ($LASTEXITCODE -ne 0) {
	Write-Host "postbuild.ps1: General error Code: Obfuscation failed"
	return
}

Here we first check whether obfuscar.globaltool is already installed, and if not, we install it. Then we run obfuscar.globaltool with the specified config file (obfuscar.xml) and check the exit code: if it is not 0 (code 0 means success) we write the following message to the output:

 postbuild.ps1: General error Code: Obfuscation failed

That is the actual trick: the message has a special format recognized by Visual Studio. In this case VS will understand that an error happened in the post-publish event and will show an error in the build output. More details can be found here: Emitting custom warnings/errors in Visual Studio custom build events. With this approach you will see build errors if obfuscation failed.
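For completeness, a minimal obfuscar.xml placed into the publish folder might look roughly like this (the assembly name is a hypothetical example; see the Obfuscar documentation for the full list of options):

<?xml version="1.0" encoding="utf-8"?>
<Obfuscator>
  <!-- read assemblies from the current (publish) folder -->
  <Var name="InPath" value="." />
  <!-- write obfuscated assemblies to a subfolder -->
  <Var name="OutPath" value=".\Obfuscated" />
  <Module file="$(InPath)\MyWebApp.dll" />
</Obfuscator>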

Wednesday, June 5, 2024

Use PowerShell in Visual Studio build events when building a Docker image for a .NET app

Imagine that we have a .NET application and created a Dockerfile for it, which often contains build and runtime stages (in this example I use .NET 6 but it is also valid for higher versions):

# build stage
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
COPY ...
RUN dotnet restore ...
RUN dotnet publish ...

# runtime stage
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
...

and that our VS solution contains build steps which use PowerShell (assume that it is cross-platform PowerShell 7.x). These steps will run during the "dotnet publish" step of the build stage. You may face the following error:

powershell command not found

A quick way to fix it is to use the "pwsh" command instead of "powershell". It works because PowerShell became part of the .NET SDK Docker images since the 3.0 preview, as described in the following article: Installing PowerShell with one line as a .NET Core global tool. Note also that the .NET SDK images used here are Linux-based, where "pwsh" is the usual command for running PowerShell.
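For example, a publish-time build event in the .csproj that calls a script via pwsh could look roughly like this (the target name and script path are illustrative):

  <Target Name="PostPublishScript" AfterTargets="Publish">
    <!-- pwsh works both on Windows (PowerShell 7) and inside the Linux .NET SDK build stage -->
    <Exec Command="pwsh -ExecutionPolicy Unrestricted -File $(ProjectDir)postpublish.ps1" />
  </Target>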

If for some reason you don't want to change "powershell" to "pwsh" in the code (e.g. if you need to build apps both on Windows and Linux), you may use the following approach: add the following command to the Dockerfile:

RUN ln -s pwsh /usr/bin/powershell

It will add a "powershell" symlink which in turn runs pwsh. After that your powershell commands in build events will work both on Windows and Linux, including Docker images.

Sunday, April 7, 2024

Introduction to Docker: what is Docker and what problems it solves

Software development is not only about writing code. From my experience I can tell that writing the code itself is not that difficult (get some data from a database, show it in the UI, process this data, save it back to the DB, call some external services, etc. - the tasks are quite common). But that is not all. We also need to ensure that non-functional requirements are met, i.e. that the app:

  • handles errors/exceptions correctly
  • works securely (authentication/authorization/policies, etc.)
  • has good performance
  • scales well under increased load
  • allows quickly finding the cause of an error/problem in the production environment
  • supports updates (preferably without downtime)
  • is not expensive to maintain and support during its life cycle
  • is tolerant to hardware failures
  • etc.

And if we put all this together, the story becomes not so easy. Let's look at an example. Assume that we have some system which consists of several components running on the same single server:

How do we ensure scalability and fault tolerance of this system? The first thing that comes to mind is to move the application components from one physical server to virtual machines, run them on multiple physical servers and provide orchestration between them (set up a cluster):

If one VM or server fails, the rest will continue to work. We won't go into details here about how to organize the orchestration of such a cluster (routing of HTTP requests with a network load balancer, replication of database servers, distributed caching, logging, etc.) - it is out of scope of the current article. Here we just need to understand the problem, so let's continue.

As we just saw, with such a cluster the scaling and fault tolerance of the system got improved. However, if we look inside the virtual machines we'll find that the overall situation with components and dependencies didn't change much: different components use different dependencies, and it is possible that one application needs a certain version of some library, while another needs a different version of the same library. I.e. we need to keep several different versions of the same library in one system.

Also, how do we update such a cluster? Upload updates to each VM and update all components one by one? Yes, it is possible to do it that way, but what will happen when the number of components grows and the number of environments where these components need to run also increases? In this case the dependencies of the different applications accumulate and we get the so-called "matrix of hell":

The maintenance cost of such a system grows with every component and environment added.

How can we improve that? E.g. if we were able to package a component/service of our distributed system along with all the necessary dependencies, configuration, environment variables, etc. into "something" that would allow us to run this component/service in any environment on any OS (on a development machine, a standalone server, in a cluster, on a production stand), then we could simply transfer this "something" between environments:

Here containers and Docker come onto the scene. When we talk about containers, the first thing we may imagine is a huge barge carrying cargo containers:

In the context of software development this image is a pretty good analogy. As we will see below, a Docker image is a kind of cargo container which holds a component and all its dependencies inside. That's why the Docker logo looks like a whale carrying cargo containers on its back:

The term "container" came from UNIX-based operating systems. Originally the term "jail" was used, but "container" has become the preferred term since 2005 with the release of Sun Solaris 10 and Sun Containers. A container is an isolated runtime environment for an app which prevents the app from accessing resources outside its container (allowing access only to those resources that are explicitly allowed).

However, manual creation and configuration of containers is a quite complex and error-prone process. Docker is used to solve this problem. In the context of Docker, containers are child processes of the Docker background service (the Docker daemon). Any software running with Docker runs inside a container.

Containers are launched from images. As mentioned above, a Docker image is a good analogue of a cargo container. Images are stored in repositories, which in turn are organized into registries. The most well-known public image registry is Docker Hub. It is also possible to run your own private local image registry within the company.
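For example, publishing an image to a private registry boils down to tagging it with the registry address and pushing it there (the registry host name below is a hypothetical example):

docker pull nginx:latest
docker tag nginx:latest registry.mycompany.local/nginx:latest
docker push registry.mycompany.local/nginx:latest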

Docker consists of several parts:

  • a CLI tool
  • a background service (daemon)
  • a set of remote services (Docker Hub, JFrog, etc.)

Together they simplify the management of containers and allow building your own container management infrastructure:

Docker is open source. Although it came from the Linux world, it also runs on Windows (on top of Hyper-V or WSL2) and macOS. Note, however, that while it is quite easy to run Linux containers (containers with a Linux runtime) in Docker on Windows (as well as Windows containers of course), since under the hood WSL2 is a lightweight virtual machine with a real Linux kernel, running a Windows container in Docker on Linux is not that easy:

There are solutions for that, but they are not that straightforward (e.g. you may run a Windows Server Core OS inside VirtualBox, which in turn runs inside a Docker container on Linux, or use the Wine shell). Also, licensing issues should be solved since Windows is not free.

Note that a container is not the same as a virtual machine:

Virtual machines:

  • launch their own OS in which the installed software runs
  • require more resources (an average PC can only run a few VMs)
  • start slower
  • support snapshots, which is good, but snapshots have their own problems: large size, issues with diff tracking and versioning
  • from one set of VMX/VMDK files only one VM can be launched.

Containers, on the other hand:

  • run on the same host OS kernel
  • require fewer resources (on an average PC you can run many containers at the same time)
  • start within a few seconds
  • changes are added as an additional layer in a union file system: it is possible to track changes and view history
  • it is possible to start many containers from one image.

Now with this knowledge we may solve the matrix of hell mentioned above using Docker containers:

But there is a new question: how to manage this matrix? 🙂 Here we come to container orchestration technologies like Kubernetes, Docker Swarm, etc. This topic is out of scope of the current article (I plan to write about it later as well).

And at the end, an example of how Docker may help developers in everyday work. As a developer you may need to run different versions of some database engine simultaneously in order to test functionality on these versions. Docker is a perfect tool for that. E.g. if you run PostgreSQL 16 on your host OS and want to test code on the older PostgreSQL 10, you need only 2 commands for that:

docker pull postgres:10
docker run -d -p 5432:5432 --name postgres10 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres -it postgres:10

Here I used host port 5432 because I don't have any Postgres version running on my host (to be honest, with Docker I don't want to install any db engines on my host anymore 🙂), i.e. this port is not busy. Otherwise just use a different host port and map it to the internal port 5432 used by Postgres inside the container (e.g. "-p 6432:5432"). After that you may connect to the db engine and work with it as usual:
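For example, you can open a psql session right inside the started container (using the container name from the run command above):

docker exec -it postgres10 psql -U postgres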

 

That's all I wanted to write about Docker here. I hope that this information will help you understand this technology and will motivate you to learn it further and use it in your work.

Wednesday, March 20, 2024

Inject a class into a Python module dynamically

As you probably know, in Python after you import some module you may call the functions defined in this module. But if you work with more complex scenarios (e.g. when the imported module is provided automatically by the infrastructure (i.e. you don't control it) or when classes are generated at runtime using Python metaclasses), you may face a situation when some class is missing in the imported module. In this case Python will show the following error:

AttributeError: module 'SomeExternalModule' has no attribute 'SomeAPI'

(in this example we assume that the module name is SomeExternalModule and the class name is SomeAPI). Of course, if you need the functionality defined in the missing class, you have to resolve this issue properly and ensure that the missing class exists in the imported module. But if it is not critical and you just want to pass through these calls, you may use a trick with injecting the class into the module dynamically. E.g. a class with a single static method (a method without the "self" first argument) can be added to the module like this:

def Log(msg):
    print(msg)

SomeExternalModule.SomeAPI = type("SomeAPI", (object, ), {
    "Log": Log
})

Here we injected the class SomeAPI into the module SomeExternalModule. The class contains one method called Log which just prints a message to the console. In addition to static methods we may also add instance methods using a similar technique - just add the "self" first argument to the method in this case, as sketched below.
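A minimal sketch of the instance method variant (same hypothetical module and class names as above):

def Log(self, msg):
    # instance method: note the "self" first argument
    print(msg)

SomeExternalModule.SomeAPI = type("SomeAPI", (object, ), {
    "Log": Log
})

# now instance calls like SomeExternalModule.SomeAPI().Log("test") will work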

After that, if you run code which relies on SomeExternalModule.SomeAPI() calls, the error should be gone.

Sunday, March 10, 2024

Cinnamon: first thing you may want to install when switched from Windows to Linux

If you worked in Windows and then switched to Linux, it may be painful in the beginning since the user experience is a bit different (and here we are talking about the graphical UX in Linux, not the command line). E.g. this is how RHEL8 (Red Hat Enterprise Linux 8) with the default GNOME desktop environment looks:

Yes, there are windows too, but no minimize/maximize icons and no taskbar. The good news is that there is a more Windows-like desktop environment available called Cinnamon. Here you may find instructions on how to install it on RHEL (you may also find a more complete list here). After installation and reboot you will be able to select Cinnamon from the list of available desktop environments on the login screen:


And the system will look more familiar to those who worked with Windows:


There will be minimize/maximize icons, a taskbar and other familiar things. Hopefully with them the transition from Windows to Linux will go smoother.

Tuesday, February 6, 2024

Fix a problem with the Git client for Linux which asks for credentials on every push even with an installed SSH key

Recently I faced the problem that the Git client for Linux (CentOS) always asked for user credentials on every push, even though an SSH key was installed. In general an SSH key is installed exactly to avoid that. So what went wrong?

Let's briefly check the whole process. First of all we need to install an SSH key pair. On Linux it can be done with the ssh-keygen tool. If you don't want to enter a passphrase on every push, just press Enter on each step. By default it will save the public/private key files (id_rsa.pub and id_rsa) into the ~/.ssh folder (where ~ means the local user folder - usually under /home/local/...). After that copy the content of the public key file id_rsa.pub, go to GitHub > your profile Settings > SSH and GPG keys > SSH keys and paste the content there:
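The key generation and copying steps look roughly like this (a sketch using the defaults; you may prefer e.g. an ed25519 key or a passphrase):

ssh-keygen
cat ~/.ssh/id_rsa.pub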

The installation of the SSH key is now complete. But if you clone some repository and try to push changes there (assuming that you have write permission in this repository), git may still ask for username/password credentials on every push. As it turned out, it depends on how the repository was cloned. There are several ways to clone repositories: HTTPS, SSH and GitHub CLI (the HTTPS tab goes first in the UI).

The mentioned problem with credentials appears when the repository is cloned via HTTPS. The solution here is to clone the repository with SSH instead:
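If the repository was already cloned via HTTPS, you can also switch the existing clone to SSH without re-cloning (a sketch; {user} and {repo} are placeholders):

git remote set-url origin git@github.com:{user}/{repo}.git
git remote -v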

After that git should not ask you for credentials anymore.

Tuesday, January 9, 2024

Verify JWT tokens with EdDSA signature algorithm

In my previous posts of this series I showed how to generate an EdDSA private/public key pair and how to sign JWT tokens using the private EdDSA key. In this last post of the series I will show how to verify a signed JWT token with the public key.

As a reminder, an EdDSA key pair may look like this in JSON format:

{
	"kty": "OKP",
	"alg": "EdDSA",
	"crv": "Ed25519",
	"x": "...",
	"d": "..."
}

where the "x" property holds the public key and "d" the private key. The private key (d) was used for signing; for verification we need to use the public key (x).
For token validation we will use the JsonWebTokenHandler.ValidateTokenAsync() method from Microsoft.IdentityModel.JsonWebTokens. Here is the code which validates and decodes the token:

string token = ...;
var jwk = ...; // get EdDSA keys pair
var pubKey = new EdDsaSecurityKey(new Ed25519PublicKeyParameters(Base64UrlEncoder.DecodeBytes(jwk.X), 0));
pubKey.KeyId = jwk.KeyId;
var result = await new JsonWebTokenHandler().ValidateTokenAsync(token, new TokenValidationParameters()
{
  ValidIssuer = JwtHelper.GetServiceName(jwk),
  AudienceValidator = (audiences, securityToken, validationParameters) => true, // or whatever logic is needed for verifying the aud claim
  IssuerSigningKey = pubKey
});
if (!result.IsValid)
  throw result.Exception;
var json = JWT.Payload(token);

Here we use the EdDsaSecurityKey class from ScottBrady.IdentityModel.Tokens.
If the public key matches the private key which was used for signing, then result.IsValid will be true (otherwise the code throws an exception). At the end we call JWT.Payload() from jose-jwt to get the JSON representation of the token (from which we may get the needed claims and other data).

With these techniques you may generate EdDSA keys, sign tokens and verify them. Hopefully the information in these posts will help you.