Wednesday 27 December 2017

Accept header based response formatting in asp.net core

To return responses from a controller's actions formatted based on the returned type, .net core uses formatters. I've cobbled together an example of how the boilerplate code can look. It's pretty similar to what you can find on MSDN, with some extra flair.

First, I've implemented the following two adapters (bear in mind there's a built-in XML serializer/formatter in .net core; I've created one just for the sake of the example). They format the list of cities passed in as DTOs.

public class XmlCityAdapter : ICityTextAdapter
{
    public AdapterOutputFormat AdapterOutputFormat => AdapterOutputFormat.Xml;

    public string GetFormatted(IEnumerable<CityDto> cities)
    {
        var citiesElement = cities.Select(c =>
            new XElement("City",
                new XAttribute("Name", c.CityName),
                new XAttribute("Population", c.Population)));
        
        var doc = new XElement("Cities", citiesElement);
        return doc.ToString();
    }
}

and

public class CsvCityAdapter : ICityTextAdapter
{
    public AdapterOutputFormat AdapterOutputFormat => AdapterOutputFormat.Csv;

    public string GetFormatted(IEnumerable<CityDto> cities)
    {
        var stringBuilder = new StringBuilder();
        foreach (var city in cities)
        {
            stringBuilder.AppendLine($"{city.CityName};{city.Population}");
        }
        return stringBuilder.ToString();
    }
}

There are many, many ways to perform such serialization. Here's the interface behind these classes:

public interface ICityTextAdapter
{
    AdapterOutputFormat AdapterOutputFormat { get; }

    string GetFormatted(IEnumerable<CityDto> cities);
}

The next step was to add a boilerplate base class for the formatters. A formatter needs to derive from one of the base classes offered by the core libraries; my choice was OutputFormatter.
For our response formatter to work we need to override the following methods:
  • CanWriteType (to assess whether the passed-in type should be formatted or not)
  • WriteResponseBodyAsync (to do the actual formatting)
Another method we could override is WriteResponseHeaders. It isn't really needed though, because the headers can safely be added in the WriteResponseBodyAsync method.
The base class will implement the above; it will also use the ServiceProvider, which will allow us to get our adapter classes from the container.

Here's the base class:

public abstract class BaseFormatter<TFormattedType, TFormatter> : OutputFormatter
{
    private const string ContentDispositionHeader = "Content-Disposition";

    protected override bool CanWriteType(Type type)
    {
        return base.CanWriteType(type) && type.IsAssignableFrom(typeof(TFormattedType));
    }
    
    public override async Task WriteResponseBodyAsync(OutputFormatterWriteContext context)
    {
        if (!(context.Object is TFormattedType contextObject))
        {
            return;
        }

        var response = context.HttpContext.Response;
        var serviceProvider = response.HttpContext.RequestServices;
        var availableAdapters = serviceProvider
            .GetServices(typeof(TFormatter))
            .Cast<TFormatter>();                

        if(availableAdapters != null)
        {
            response.Headers.Add(ContentDispositionHeader, $"attachment;filename={GetFileName(contextObject)}");
            var formattedResult = GetFormattedResult(availableAdapters, contextObject);
            await response.WriteAsync(formattedResult);
        }             
    }

    protected abstract string GetFileName(TFormattedType contextObject);

    protected abstract string GetFormattedResult(
        IEnumerable<TFormatter> availableAdapters, 
        TFormattedType contextObject);
}

TFormattedType is the type we validate against to check whether we can perform the formatting. If CanWriteType returns false, the other methods of the class won't be called.
TFormatter is the interface whose implementations we will resolve from the container.

Because the formatters can't get dependencies through the constructor, we need to reach the container through the HttpContext. From it we try to resolve all the implementations of our TFormatter type.

Another responsibility of this class is attaching the Content-Disposition header. Each deriving class sets the filename in that header through the overridable GetFileName method.
The last thing to note is that the use of the adapters has also been delegated to the deriving classes.

An example of the CSV formatter follows. As you can see, it passes List<CityDto> as the type which is returned by the controller's action and which will get formatted. It also passes the ICityTextAdapter interface used for resolving the adapter implementations:

public class CsvCityFormatter : BaseFormatter<List<CityDto>, ICityTextAdapter>
{
    public CsvCityFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("text/csv"));
    }
    
    protected override string GetFileName(List<CityDto> contextObject)
    {
        return "Cities.csv";
    }

    protected override string GetFormattedResult(
        IEnumerable<ICityTextAdapter> availableAdapters, 
        List<CityDto> contextObject)
    {
        var csvAdapter = availableAdapters.FirstOrDefault(p => p.AdapterOutputFormat == AdapterOutputFormat.Csv);
        if (csvAdapter == null)
            throw new NullReferenceException(nameof(csvAdapter));
        return csvAdapter.GetFormatted(contextObject);
    }
}

The actual implementation of the formatter also adds the supported media type, and it uses the abstract methods to:
  • pass the filename, which is set in the Content-Disposition header
  • choose and use the adapter
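
The XML formatter registered in Startup below isn't listed in this post; a minimal sketch - mirroring the CSV formatter and assuming the XmlCityAdapter from above - could look like this:

public class XmlCityFormatter : BaseFormatter<List<CityDto>, ICityTextAdapter>
{
    public XmlCityFormatter()
    {
        SupportedMediaTypes.Add(MediaTypeHeaderValue.Parse("application/xml"));
    }

    protected override string GetFileName(List<CityDto> contextObject)
    {
        return "Cities.xml";
    }

    protected override string GetFormattedResult(
        IEnumerable<ICityTextAdapter> availableAdapters, 
        List<CityDto> contextObject)
    {
        // Pick the adapter which declares XML output, same pattern as in the CSV formatter
        var xmlAdapter = availableAdapters.FirstOrDefault(p => p.AdapterOutputFormat == AdapterOutputFormat.Xml);
        if (xmlAdapter == null)
            throw new NullReferenceException(nameof(xmlAdapter));
        return xmlAdapter.GetFormatted(contextObject);
    }
}
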
The last piece to note is the registration of the formatters in the Startup class:

services.AddMvc(options =>
{                
    options.OutputFormatters.Add(new XmlCityFormatter());
    options.OutputFormatters.Add(new CsvCityFormatter());
});
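
For completeness, here's what the producing side could look like - a hypothetical action returning the formatted type, plus the adapter registrations the formatters rely on (the adapters are resolved from the container inside WriteResponseBodyAsync). A request with an Accept: text/csv header then ends up in CsvCityFormatter, and Accept: application/xml in XmlCityFormatter:

// in Startup.ConfigureServices - the adapters need to be registered, since the formatters resolve them:
services.AddTransient<ICityTextAdapter, CsvCityAdapter>();
services.AddTransient<ICityTextAdapter, XmlCityAdapter>();

[Route("api")]
public class CityController : Controller
{
    [HttpGet("cities")]
    public List<CityDto> Get()
    {
        // Returning the plain type lets MVC pick an output formatter based on the Accept header
        return new List<CityDto>
        {
            new CityDto { CityName = "Bristol", Population = 100000 },
            new CityDto { CityName = "Berlin", Population = 3500000 }
        };
    }
}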

What could be further added is:

  • null/error checking
  • logging
  • selection of the adapter could be based on another generic type, instead of an enum - which would make it much more robust and solid.

The code sample will be added on github soon.

Sunday 26 November 2017

.net core and entity framework integration tests with in memory database

    Integration testing is really useful, and any tool or framework which makes it easier is welcome. For .net core we get the TestServer class and Entity Framework's in-memory database. I've built a small example, with a useful wrapper class, which I'll present in this post.

Here's the whole code sample, including a small API in .net core 2 and integration tests written using xUnit:
https://github.com/simonkatanski/coreintegrationtests

In the following I'm not going to cover the basics of Entity Framework or the setup of a .net core API - there are plenty of articles about that elsewhere. This post also shouldn't be taken as an example of good design; I'm trying to make it as small and self-contained as possible.

I'll start with showing a simple DbContext which I want to "mock" with an "in-memory" version in my tests. A small db containing some exemplary city data.

public class CityContext : DbContext, ICityContext
{
    public DbSet<City> Cities { get; set; }
    
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseSqlite("Data Source=Cities.db");
}
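
The ICityContext interface itself isn't shown here; judging from how the controller and the tests use it, a minimal version (my assumption) would be:

public interface ICityContext
{
    DbSet<City> Cities { get; }

    int SaveChanges();
}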

It is used directly in our controller (not a good thing IRL ;)) like this:

[Route("api")]
public class CityController : Controller
{
    private readonly ICityContext _cityContext;

    public CityController(ICityContext cityContext)
    {
        _cityContext = cityContext;
    }
    
    [HttpGet("cities")]
    public IEnumerable<string> Get()
    {
        var cities = _cityContext.Cities.ToList();
        if (cities.Any())
        {
            return cities.Select(c => c.CityName);
        }
        return Enumerable.Empty<string>();
    }

    [HttpPost("cities")]
    public void Post([FromBody]City city)
    {
        _cityContext.Cities.Add(city);
        _cityContext.SaveChanges();
    }
}

As you can see it is minimal. Having created our small API, we want to write integration tests for the implemented actions. After creating the integration tests project, we can create our "in-memory" version of CityContext.

public class InMemoryCityContext : CityContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseInMemoryDatabase(Guid.NewGuid().ToString());
}

We derive from the context and override the OnConfiguring method. Bear in mind this might work slightly differently if your DbContext is configured in another way. To make it possible to swap CityContext for the InMemoryCityContext implementation, we need to prepare the appropriate container registration.

In our case (we are using the .net core built-in dependency injection framework) we will use the following registration method; it will not register another ICityContext implementation if one already exists in the container:

services.TryAddScoped<ICityContext, CityContext>();

This means we need to register the "in-memory" version before the actual API registration takes place. Other containers are more flexible and overriding a registration is much easier with them, but I'm going to focus on the built-in container.

Microsoft provides the following package for integration testing: "Microsoft.AspNetCore.TestHost". It contains the TestServer class, which can use our API's Startup class and set up an in-memory API for testing.

I've built a wrapper for it, to make it easier to manage the mocked DB context.

public class ApiTestServer : IDisposable
{
    private readonly TestServer _server;
    public InMemoryCityContext CityContext { get; }

    /// <summary>
    ///     A wrapper around the TestServer, which also contains the 
    ///     EF contexts used in the API.
    /// </summary>
    public ApiTestServer()
    {
        _server = new TestServer(new WebHostBuilder()
            .UseStartup<Startup>()
            .ConfigureServices(RegisterDependencies));

        CityContext = new InMemoryCityContext();
    }

    public RequestBuilder CreateRequest(string path)
    {
        return _server.CreateRequest(path);
    }
    
    /// <summary>
    ///     Register dependencies, which differ from the ordinary setup of the API. 
    ///     For the registration here to work, you need to use the TryAdd* versions
    ///     of container registration methods.
    /// </summary>
    private void RegisterDependencies(IServiceCollection service)
    {
        service.AddSingleton<ICityContext, InMemoryCityContext>(serviceProvider => CityContext);
    }

    public void Dispose()
    {
        CityContext?.Dispose();
        _server?.Dispose();
    }
}

As you can see above, the class exposes the context, so that we can both seed our test data and validate the changes introduced by the API. It also exposes the RequestBuilder, which allows us to send HTTP requests. ApiTestServer registers the context as a singleton; it is later picked up thanks to the TryAddScoped registration of the original context.

An example created with xUnit:

[Fact]
public async Task GivenNonEmptyDb_ThenExpectCityToBeAdded()
{
    using (var server = new ApiTestServer())
    {
        //Arrange
        server.CityContext.Add(new City { CityName = "Bristol", Population = 100000, Id = 0 });
        server.CityContext.SaveChanges();

        var cityToAdd = new City { CityName = "Berlin", Population = 100000 };
        var stringData = JsonConvert.SerializeObject(cityToAdd);
        var request = server.CreateRequest("/api/cities")
            .And(c => c.Content = new StringContent(stringData, Encoding.UTF8, "application/json"));

        //Act
        var response = await request.PostAsync();

        //Assert
        Assert.True(response.StatusCode == HttpStatusCode.OK);
        Assert.True(server.CityContext.Cities.Count() == 2);
    }
}

In the example we:

  • create our test server
  • setup the context (state of the db before the API call)
  • prepare the request
  • call the API with the request
  • assert the context state

This way, in a real-life case, by adding all our contexts into the ApiTestServer class we can prepare a complete setup for our DB and our integration tests.

UPDATE:
I've explicitly set the Id to 0 for the entity added to the context prior to the HTTP call. Obviously that's the default value for that property - but in case you are using a data generator like Fizzware's NBuilder, you might forget to set it back to 0 explicitly and struggle to find the reason why EF is not assigning a new value to the newly added entity. For Entity Framework's integer primary key value generator to produce a new value, the property has to be 0; otherwise it will be skipped by the generator.

Friday 17 November 2017

.net core dependency injection container and decorator pattern registration

I've spent some time with the Dependency Injection container, which is the default out-of-the-box IOC implementation that comes with .net core. It's fast and lightweight.
You can see the performance comparison here:
http://www.palmmedia.de/blog/2011/8/30/ioc-container-benchmark-performance-comparison

It is however not very extensible. It's pretty easy to switch to something else, but if you really want to stay with the built-in IOC you have to invest some time into extending it. The two classes which are the cornerstone of this IOC are ServiceProvider and ServiceCollection. One of the issues I've had with it was implementing the decorator pattern.
http://www.dofactory.com/net/decorator-design-pattern

The following implementation, which I've put together, is small and lightweight, but it is also quite slow. I've omitted the null checks and most of the validation to make it more concise. At the end I'll note what could be done to make it faster and potentially more robust.
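
To make the registration below concrete: IDataSource, DataSource and DataSource2 are just placeholder types. DataSource is the decorated implementation, and DataSource2 is assumed to be a decorator which wraps the inner IDataSource it receives through its constructor, roughly like this:

public interface IDataSource
{
    string GetData();
}

public class DataSource : IDataSource
{
    public string GetData() => "data";
}

public class DataSource2 : IDataSource
{
    private readonly IDataSource _inner;

    // The AddDecorator extension below injects the constructed DataSource instance here
    public DataSource2(IDataSource inner)
    {
        _inner = inner;
    }

    public string GetData() => $"decorated({_inner.GetData()})";
}

After the two registrations below, resolving IDataSource from the container yields a DataSource2 wrapping a DataSource.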

servicesCollection.AddTransient<IDataSource, DataSource>();
servicesCollection.AddDecorator<IDataSource, DataSource2>();

AddTransient is just an ordinary type registration method of the .net core IOC. AddDecorator looks as follows:

public static void AddDecorator<TService, TDecorator>(this IServiceCollection services,
    ServiceLifetime lifetime = ServiceLifetime.Scoped)
    where TService : class
    where TDecorator : class
{
    var serviceDescriptor = new ServiceDescriptor(
        typeof(TService),
        provider => Construct<TService, TDecorator>(provider, services), lifetime);
    services.Add(serviceDescriptor);
}

One of the ways of extending how types are resolved is by adding a factory-based registration, as above. All the container's registrations are based on service descriptors, and we are using one of the available ones. Unfortunately, factory registrations are not generic, and hence they don't store the resolved implementation type in the service collection - it would be much easier if that were not the case.

private static TDecorator Construct<TService, TDecorator>(IServiceProvider serviceProvider,
    IServiceCollection services)
    where TDecorator : class
    where TService : class
{
    var type = GetDecoratedType<TService>(services);
    var decoratedConstructor = GetConstructor(type);
    var decoratorConstructor = GetConstructor(typeof(TDecorator));
    var decoratedDependencies = serviceProvider.ResolveConstructorDependencies(
        decoratedConstructor.GetParameters());
    var decoratedService = decoratedConstructor.Invoke(decoratedDependencies.ToArray()) 
        as TService;
    var decoratorDependencies = serviceProvider.ResolveConstructorDependencies(
        decoratedService, 
        decoratorConstructor.GetParameters());
    return decoratorConstructor.Invoke(decoratorDependencies.ToArray()) as TDecorator;
}

The factory method takes in the generic type TDecorator of the decorator service and the TService type of the interface it implements.
First, we need to find the decorated type, which is already registered in the container's service collection.

private static Type GetDecoratedType<TService>(IServiceCollection services)
{
    if (services.Count(p => 
        p.ServiceType == typeof(TService) && 
        p.ImplementationFactory == null) > 1)
    {
        throw new InvalidOperationException(
            $"Only one decorated service for interface {nameof(TService)} allowed");
    }

    var nonFactoryDescriptor = services.FirstOrDefault(p => 
        p.ServiceType == typeof(TService) && 
        p.ImplementationFactory == null);
    return nonFactoryDescriptor?.ImplementationType;
}

To find the correct implementation we look for the one which implements the interface TService and which doesn't have a factory-based registration (because only the decorator class can have one).
Then we return its type.

private static ConstructorInfo GetConstructor(Type type)
{
    var availableConstructors = type
        .GetConstructors()
        .Where(c => c.IsPublic)
        .ToList();

    if (availableConstructors.Count != 1)
    {
        throw new InvalidOperationException("Only single constructor types are supported");
    }
    return availableConstructors.First();
}

Then we get the constructors of both the decorator and the decorated service. We only support classes with a single public constructor. With the ConstructorInfo we have access to the number and types of all constructor parameters for these two classes. We resolve those dependencies using the ServiceProvider; in the decorator's case we inject the previously constructed decorated instance. Here are the extension methods for the ServiceProvider:

public static List<object> ResolveConstructorDependencies<TService>(
    this IServiceProvider serviceProvider,
    TService decorated,
    IEnumerable<ParameterInfo> constructorParameters)
{
    var dependenciesList = new List<object>();
    foreach (var parameter in constructorParameters)
    {
        if (parameter.ParameterType == typeof(TService))
        {
            dependenciesList.Add(decorated);
        }
        else
        {
            var resolvedDependency = serviceProvider.GetService(parameter.ParameterType);
            dependenciesList.Add(resolvedDependency);
        }
    }
    return dependenciesList;
}

public static List<object> ResolveConstructorDependencies(
    this IServiceProvider serviceProvider,
    IEnumerable<ParameterInfo> constructorParameters)
{
    var dependenciesList = new List<object>();
    foreach (var parameter in constructorParameters)
    {
        var resolvedDependency = serviceProvider.GetService(parameter.ParameterType);
        dependenciesList.Add(resolvedDependency);
    }
    return dependenciesList;
}

Having materialized the collection of dependencies, we can create an instance of the decorator class, with the decorated instance injected, using the constructor's Invoke method.

decoratorConstructor.Invoke(decoratorDependencies.ToArray()); 

The exact sources are available in the following github repository:
https://github.com/simonkatanski/dependencyinjection.extensions

The things to improve:
- null checks
- more validation
- constructor lookup (instead of retrieving the constructor each time, we could add it to some sort of dictionary during registration and then reuse it on each call)
- support for multiple constructors (this container has it by default, if you venture into its code with reflection)
- implement registrations out of order
- find a way to have factory based decorator registration

Sunday 29 October 2017

Visual Studio 2017 Installation "A product matching the following parameters cannot be found"

          While installing a Visual Studio 2017 Community update I got a blue screen. The installation became corrupt and I had to reinstall. The moment I launched the reinstall I started getting errors, one after another.

  • while starting the installer:
    "A product matching the following parameters cannot be found:
    channelId: VisualStudio15.Release
    productId: Microsoft.VisualStudio.Product.Community"
  • while choosing a package to install:
    "Error: Sorry installation failed. Please Try Again.
I've gone through the following steps, which helped me in resolving this issue:
  • uninstalled Visual Studio (and other Visual Studio installations)
  • ran "C:\Program Files (x86)\Microsoft Visual Studio\Installer\InstallCleanup.exe" -full
  • removed all the "C:\Program Files (x86)\Microsoft Visual Studio" folders
  • removed "C:\Program Data\Microsoft\Visual Studio\ folder
  • restarted Windows
The installation ran perfectly fine afterwards - I'm not sure though which of these steps were required. I suppose the only thing I could have tried was not removing all Visual Studio installations before trying the other options.

UPDATE:
Also worth mentioning: I had to do a "Repair" after that first successful installation due to issues with launching "Extensions and Updates". Check if it works for you after you've managed to install VS.

Monday 16 October 2017

Migrating to .net core

     I haven't seen many such posts elsewhere and thought about doing a small write-up on my latest work with .net core. I had made a couple of approaches to .net core before, all stopped by various issues with missing libraries and a certain volatility in its development. I was one of many waiting for version 2.0 to arrive before taking another dive into core.

Here's the list of little things to remember about and potential issues which one can stumble upon when porting an app to .net core. I won't be going into details of each, just some food for thought:

- ConfigurationManager is gone - the configuration library that comes with .net core brings the IOptions<T> interface, which allows for mapping a settings JSON file to POCO classes. All places which use ConfigurationManager will have to be rewritten with either a custom configuration framework or IOptions<T>.

Advantages:
  1. IOptions can be easily injected into classes using the DI scaffolding available out of the box
  2. It uses POCOs with nicely mapped properties, including lists and dictionaries, all strongly typed
  3. It reloads the values on change in the settings file
Disadvantages:
  1. We are bound to the IOptions interface, which seems to be an unneeded dependency
  2. The application reads the underlying POCO only when the IOptions value is accessed - which means any errors will be delayed until that point
  3. In case we don't need the reloading functionality IOptions doesn't seem that useful, for the extra boilerplate it brings to the table
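
As a rough illustration of the IOptions<T> approach (the section and class names are made up for the example):

// settings file:
// { "CitySettings": { "DefaultCity": "Bristol", "PageSize": 20 } }

public class CitySettings
{
    public string DefaultCity { get; set; }
    public int PageSize { get; set; }
}

// in Startup.ConfigureServices - bind the section to the POCO:
services.Configure<CitySettings>(Configuration.GetSection("CitySettings"));

// in a consumer - the POCO is materialized when Value is accessed:
public class CityService
{
    private readonly CitySettings _settings;

    public CityService(IOptions<CitySettings> options)
    {
        _settings = options.Value;
    }
}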
UPDATE:
You can actually download the following nuget package to get ConfigurationManager back:

- DirectoryServices namespace has not been migrated - this library is required for LDAP connectivity. It is quite often used in enterprise ecosystems and might be a blocker for many wanting to move to .net core.

Potential solutions:
  1. I've checked the only open source LDAP solution which supports .net standard: https://github.com/dsbenghe/Novell.Directory.Ldap.NETStandard - its basic functionality works. Unfortunately it does not support SSL, which is a must when you want certificate-based authentication.
  2. Write a separate proxy API for LDAP connectivity in C# or another language to be called from your .net core project - whichever is easier to route your LDAP calls.
Here's a discussion on the work being done and the migration of this feature to .net core in the future:

- Windows-based functionality is not available - I know this sounds like a no-brainer, but it took a bit before it dawned on me :). Anyone who is planning to port should take a serious look through the code base and check whether there are any Windows registry usages hiding somewhere in the plumbing of the app.

- Rijndael encryption with 256-bit block size is not supported - in case your code uses RijndaelManaged to perform encryption with a 256-bit block size, you'll have to find another implementation - such as BouncyCastle.

- The IHttpModule and IHttpHandler pipeline changed into .net core middleware - the HTTP request processing pipeline works slightly differently in .net core. There are no IHttpModule or IHttpHandler interfaces; the new guy in town is called middleware. It's pretty easy to migrate modules to it. On top of that, the DI scaffolding in .net core allows injecting dependencies both into the constructor (singletons only, given the lifetime of the middleware) and into the Invoke method - which is our request handling method. Read more here:
https://docs.microsoft.com/en-us/aspnet/core/fundamentals/middleware?tabs=aspnetcore2x
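
A minimal middleware sketch (the names are illustrative) showing both injection points - the constructor for singleton-style dependencies and Invoke for per-request ones:

public class RequestTimingMiddleware
{
    private readonly RequestDelegate _next;

    // The middleware instance is created once, so only singleton-like dependencies belong here
    public RequestTimingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    // Invoke runs per request, so scoped/transient services can be injected here
    public async Task Invoke(HttpContext context, ILogger<RequestTimingMiddleware> logger)
    {
        var stopwatch = Stopwatch.StartNew();
        await _next(context);
        logger.LogInformation("Request {Path} took {Elapsed} ms",
            context.Request.Path, stopwatch.ElapsedMilliseconds);
    }
}

// registered in Startup.Configure:
// app.UseMiddleware<RequestTimingMiddleware>();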

- Lazy loading and edmx missing from EF Core - lazy loading hasn't been implemented yet in Entity Framework Core, nor has any feature based on .edmx.
Why lazy loading is not always the best idea can be read here in detail:
https://ardalis.com/avoid-lazy-loading-entities-in-asp-net-applications
It is still an important tool in any EF user's toolbelt; a discussion of it and an open source alternative can be found here:
https://github.com/aspnet/EntityFrameworkCore/issues/3797

UPDATE:
In addition to missing lazy loading, at least for now certain functionality is not available through data annotations or conventions and requires explicit configuration in code - composite keys, among others. Hence, when you find some feature missing, check whether another EF configuration approach is available.
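
For instance, a composite key currently has to be configured with the fluent API in OnModelCreating rather than with attributes (the entity and property names below are made up):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // A composite primary key can't be declared with [Key] attributes in EF Core,
    // so it has to be configured explicitly:
    modelBuilder.Entity<CityDistrict>()
        .HasKey(d => new { d.CityId, d.DistrictId });
}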

Saturday 12 August 2017

Squashing range of old commits in git

        Changing the commit history of your branch is usually not the best idea and should be done only in very specific cases. First of all, by changing the history you will cause a great headache for whoever has changes in a branch that was branched off of the one you are changing. My case was very specific: I had to squash old commits in the develop branch, which had been used earlier by pretty much just a single developer. The effort was to create a demo application, which prompted the thought to just hide all these commits behind one "demo preparation" commit. Here are the steps I've taken to perform the squash on selected commits of the branch. I had a freshly installed git on a Windows machine at my disposal.

    1.      I've branched off of develop and created develop-squash, a branch I could work on
     
    2.      I've cloned the same repository in a separate location - I like to have an option to manually check different commits in the other repository, while I'm working on a rebase in my initial repo
    3.      I've set the default git text editor to Notepad++:
     
    git config --global core.editor "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"

    4.      I've launched interactive rebase to change the whole history of the branch:

    git rebase -i --root

    This will open the interactive rebase file in Notepad++, where we get a list of all commits in the branch. --root makes it list all commits starting from the very first one. We can pass a specific hash instead of --root if we want to change the commit history from a specific commit rather than from the first one. The list will not contain merge commits from other branches.

    Each commit in the list is set to "pick", which means the commit will be picked in the rebase operation. If we want to squash a given commit we need to change it to "squash" (or "s" for short). We can also drop specific commits or reorder them as we see fit.

    5.      After we've finished preparing the rebase, we can close the Notepad++ document. Git will pick it up and execute the rebase, going through the commits one by one. Each time there's an issue git will stop the rebase and allow us to act.
     
    6.      Whenever the rebase is stopped we get a chance to see what exactly went wrong and fix it. What usually happens is that git tries to flatten the history behind the sub-branches of the branch we're squashing, in which case it will require you to resolve conflicts. After you've resolved the conflicts with one of the many diff tools, you can continue the rebase.

    7.      Continue the rebase by executing:

    git rebase --continue

    Or abort it:

    git rebase --abort

    8.      In my case, quite often, to make sure I was merging correctly, I was checking a later point in the commit history in the second repository I had cloned earlier.

    9.      After the rebase finished I switched the names of the branches: I created develop-legacy out of the initial develop, and develop out of the develop-squash branch, like this:

    git branch -m develop develop-legacy
    git branch -m develop-squash develop

    10.   Sort out master branch and HEAD:

    I've deleted the remote develop branch and pushed the new one in. Then I've overwritten the remote master branch with my develop as well:

    git push -f origin develop:master

    The above will also set the HEAD index onto this new master, which will leave our branch structure how it should be.

    11.   Cleanup

    At this moment we still have our develop-legacy branch, which is the old branch with all our old commits. If we remove it we can pretty much remove all the associated tags and sub-branches as well. The reason being - we won't need this history anymore, since we've already rewritten it.

    If we only delete the develop-legacy branch, our old commits will still be hanging around, kept alive by old tags or sub-branches. Once there are no branches or tags associated with them, the git garbage collector should remove the orphaned commits (we can also force-run it).


All in all, there are most probably better ways of doing it. This was something I've cobbled together. I'd be happy to learn about other options which I'm sure git has in stock.

Wednesday 2 August 2017

Adding tiled watermark text to documents

        Recently I really wanted to start adding watermark text to documents containing my personal information. These are usually related to the hiring process - the various documents we frequently send out whenever we are getting x-rayed by this or that company trying to hire us.

A watermark can be helpful in ensuring that the documents we send, if they ever were to appear somewhere, somehow, are easily traceable back to the source. Obviously it is possible to remove a watermark, especially with a shabby-quality photocopy, but at least it adds one extra step which perhaps some people will not be willing to go through.

The watermark text itself could contain information about whom the document is being sent to and who created it.

I wanted to be able to scan any document I want on my home scanner, have the document in a JPG format and add a sliding, tiled watermark text of my choosing. I also wanted to be able to script this operation for multiple files.

Obstacles:
- all printers I had access to printed in PDF
- there's no free, simple way of converting PDF to JPG without sending your data to some sketchy website
- adding watermark automatically to multiple documents (otherwise the easiest way would be to use a free tool such as GIMP to add such watermark text manually)

I've done a bit of digging; the simplest solution for me was as follows and requires installing two programs:

- ImageMagick (a console-based image processing app)
- GhostScript (a pdf renderer)

ImageMagick has its own PDF to JPG conversion command, but it relies on having GhostScript installed underneath - you can also use GhostScript directly for better performance in batch jobs. I've used the ImageMagick version for ease of use.

Once everything is installed you can use the following command to convert PDF to JPG, given there's a Watermarking directory in your ImageMagick folder:

convert 
-density 150 
-trim Watermarking\test.pdf 
-quality 100 
-flatten 
-sharpen 0x1.0 Watermarking\test.jpg

You can customize the quality using different parameter values, for more info check: https://www.imagemagick.org/script/convert.php

Next we can add a watermark to the resulting image:

convert 
-font Arial 
-pointsize 40 
-size 430x270 xc:none 
-fill #80808080 
-gravity NorthWest 
-draw "rotate 15 text 5,0 'Mr XYZ Company ABC'" miff:- | composite 
-tile - Watermarking\test.jpg Watermarking\watermarked_test.jpg

You can read more on the parameters used above under the following URL: http://www.imagemagick.org/Usage/annotating/#wmark_text

Example pre-watermark:

Example post-watermark:

With the command line parameters you can, among other things, set the density of the text, its size, the size of the canvas the text is first created in, rotate the text and set the color of the font.

Once you have these commands it's really easy to parameterise them and use PowerShell, batch or another scripting tool to run them for all PDFs/JPGs in a given folder.

Troubleshooting:
- On Windows you might want to pass the full path to convert.exe, otherwise Windows may mistake the call for another convert.exe, which is a system executable related to the filesystem.

Wednesday 5 July 2017

Minimal unit test setup Visual Studio 2017, Karma, Jasmine, Angular2, Typescript

Coming from the .net world, setting up a first unit test for my Angular side project was a very painful experience. Due to the sheer number of frameworks and the amount of configuration, I had to put a lot of time into it and spend a lot of time googling. The number of frameworks doesn't help, and it's difficult to find exactly the same combination that you've chosen for your project. I decided to use Karma because it was used in one of the first Visual Studio + Karma + TypeScript tutorials I found (later I learnt from a close friend that it wasn't the best of choices).

I decided to use Visual Studio 2017 + ReSharper, first because I love these tools, and second because I had my Azure WebJob and web service projects in the same solution as my Angular2-based website.

It took me a lot of time to go through it (after quite a bit of time I was able to run my tests from ReSharper, but my goal was to make them run using either gulp tasks or NPM script tasks). After many retries and tests I've narrowed down my Karma config (karma.conf.js) to the one below:


module.exports = function (config) {
    config.set({
        frameworks: ['jasmine', 'karma-typescript'],

        files: [
            'tests/**/*.js',
            'tests/**/*.ts'
        ],

        karmaTypescriptConfig: {
            compilerOptions: {
                module: 'commonjs',
                target: 'es2015'
            },
            tsconfig: 'tsconfig.json'
        },

        preprocessors: {
            '**/*.ts': ['karma-typescript']
        },

        reporters: ['progress', 'htmlDetailed'],

        plugins: [
            'karma-phantomjs-launcher',
            'karma-jasmine',
            'karma-typescript',
            'karma-html-detailed-reporter'
        ],

        htmlDetailed: {
            splitResults: false
        },

        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['PhantomJS']
    });
}

Most of the errors I was receiving stemmed from misconfiguration and the fact that I was missing some dependencies. Karma is very configurable, and most probably using something simpler like Tape would have got me going much faster.

Other steps I've taken to make it run/troubleshoot:
- installed the Task Runner Explorer extension, which gives a nice GUI for the tasks
- I've checked running the tests both in PhantomJS and headless Chrome
- when I was looking for errors I switched the logging level to Verbose; being able to see the logs displayed in the Task Runner's output greatly helped me pinpoint different issues
- using the karma-typescript preprocessor for the ts files, along with its config, resolved a lot of "unexpected token" errors
- many of the errors were related to an incorrect module system or ECMAScript version
- the first unit test (or spec file) which I successfully launched was also my 'proof of concept' and was really simple:


describe('1st tests', () => {
    it('true is true', () => expect(true).toBe(true));
});




Saturday 20 May 2017

Set up page reload/typescript compilation on save in Visual Studio 2015 with typescript

      The goal is to be able to transfer changes made to TS code during debugging onto the running website. This is possible in Visual Studio using the Browser Link functionality. Whenever we debug with the Browser Link Dashboard open, we should be able to see our browser of choice listed under the specific project during the debug session.

This actually allows us to have the same project running in multiple browsers and refreshed in all of them at the same time.



Straight away we learn about the particular requirements which must be fulfilled to make it work:

1.       Static HTML file linking should be enabled by adding the following to the Web.config:
 
<system.webServer>
<handlers>
<add name="Browser Link for HTML" path="*.html" verb="*" type="System.Web.StaticFileHandler, System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" resourceType="File" preCondition="integratedMode" />
</handlers>
</system.webServer>

2.       Debugging must be enabled in the Web.config file (this will typically be added to your web.config by default)

<system.web>
  <compilation debug="true" targetFramework="4.5.2"/>
</system.web>
 
3.       IIS server must run on .NET 4.0 or later.
 
4.       Compile on save has to be enabled in the project options, or the corresponding setting in the csproj file has to be set (this can actually be changed mid-debugging):

Project properties:

Project file:
<TypeScriptCompileOnSaveEnabled>True</TypeScriptCompileOnSaveEnabled>

5.       Server side caching (files cached by IIS)

Add the following to the Web.config to instruct IIS not to cache files:
<caching>
   <outputCache enableOutputCache="false" />
</caching>
or, if it's IIS 7+ (which IIS Express will be):
<system.webServer>
    <caching enabled="false" />
</system.webServer>

6.       Client side caching (in-browser caching)

Chrome:
An option is to "Disable cache" in Dev Tools -> Networking
 

This will however force you to have Dev Tools open while debugging. There's similar functionality in Firefox, where caching can only be disabled for as long as the developer tools are open. To simplify working with such a setup it's good to open Dev Tools in a separate window - even if you're not using it at that moment you can keep it open in the task bar.

7.       There are numerous examples of running Chrome in a non-caching mode; none of them worked for me (Chrome opened from the command line in incognito mode, disabling the application cache with an argument, or outright setting the cache limit to a very small number). In my case it only worked for the first index file opened, but the template HTML files used in components were not getting refreshed.

Environment used:
Visual Studio 2015 Update 3
Resharper 2016.2
Project created using Angular2WebTemplate

Someone's workaround: