Historically I've always been a database guy; I never used Entity Framework until I started working with ASP.NET Core and implementing it for my cloudscribe projects. Recently I began getting cloudscribe ready for ASP.NET Core 2.0 and Entity Framework Core 2.0, and in the process I learned that some of my "database guy" assumptions about using Entity Framework were wrong. I also learned about some things that need to be done differently when using Entity Framework Core 2.0 with ASP.NET Core 2.0 compared with how things were done in 1.x. In this post I will share what I have learned in the hope that it may help others.

Avoid Using Database Default Values

As a "database guy", it seemed natural to me to want to specify default values in the database, but when you do that with Entity Framework there are some important nuances that in general make that a bad idea. In most cases you should instead use default values on the entity like this:

public bool AllowNewRegistration { get; set; } = true;

I was doing that, but as a "database guy", my instinct was to also make that the default in the database by specifying a default value in OnModelCreating of my DbContext like this:

entity.Property(p => p.AllowNewRegistration)
    .IsRequired()
    .HasColumnType("bit")
    .HasDefaultValue(true);

However, after updating to Entity Framework Core 2.0-preview2, I began seeing warnings logged at runtime and when generating a new migration, like this:

The 'bool' property 'AllowNewRegistration' on entity type 'SiteSettings' is configured with a database-generated default. This default will always be used when the property has the value 'false', since this is the CLR default for the 'bool' type. Consider using the nullable 'bool?' type instead so that the default will only be used when the property value is 'null'.

This kind of scared me at first because I thought it was a change in behavior from Entity Framework Core 1.x to 2.x, but it turned out the behavior is the same in 1.x; only the warnings are new in 2.0. It scared me because it sounded like any time my entity value was false, the database default of true would be used. That is in fact what happens, but only on inserts; on updates, whatever value is set on the entity is respected.

There is some nuance in understanding this warning. If you have a bool property on the entity and you use a database default of false, you still get the warning, but there isn't really a problem: if the entity value is true at insert time you get the expected result, and it will be true in the database after insert. The trouble comes when you specify a database default of true. Since the CLR default for bool is false, if the entity has false for that property at insert time it will get the database default of true rather than the value that was set on the entity. Whether this is a real problem in your application depends on whether that property is surfaced in the UI for creating the entity. If the property is only editable when updating the entity, not when creating it, you still get the expected results. But if the property can be specified at creation time and it is set to false, the database default of true will be applied instead, which is not what you want.
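
To make that concrete, here is a hypothetical sketch of what happens at insert time with the configuration above, assuming a DbContext instance named db with a SiteSettings DbSet:

var site = new SiteSettings { AllowNewRegistration = false };
db.SiteSettings.Add(site);
db.SaveChanges();
// Because false is the CLR default for bool, Entity Framework treats the
// property as "not set" and omits it from the INSERT, so the database
// default of true wins and AllowNewRegistration ends up true in the database.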

So, a rule of thumb: do not specify default values in the database. As an aside, in general you also do not need to specify the ColumnType as I did above. It causes no problems or warnings, but you can generally trust the Entity Framework provider to choose the right data type for the database.

Exceptions to this rule do exist

Ok, so should we never specify a default value? Never say never. Given what we now understand about how default values are used on inserts, consider what happens when we add a new bool property with a default value of true on the entity itself, but no default value specified in the database: what do existing rows get when the migration is applied? It turns out the existing rows will not get the entity default of true; they will get the CLR default of false instead. This is not what we want.

In this case the solution is to go ahead and specify a default value of true in the database, generate a migration, then remove the database default and generate another migration. The existing rows get true from the first migration, which is what we want, and the second migration removes the database default so that we get the expected results on new inserts.
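
A sketch of that two-step process, assuming a hypothetical new bool property named RequireApproval:

// Step 1: temporarily configure a database default so the first migration
// backfills existing rows with true.
entity.Property(p => p.RequireApproval)
    .HasDefaultValue(true);
// generate the first migration, e.g. dotnet ef migrations add AddRequireApproval

// Step 2: remove the HasDefaultValue(true) call above, then generate a
// second migration that drops the database default for new inserts, e.g.
// dotnet ef migrations add RemoveRequireApprovalDefault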

Do Specify MaxLength Where Appropriate

I learned this when I first began using Entity Framework Core, not as part of updating from 1.x to 2.x, but it is worth mentioning for anyone new to Entity Framework. When you have string properties on your entity and you don't specify a MaxLength, you get nvarchar(max) in SqlServer or text on other database platforms. So unless you really need that much space for the value, you should always specify a MaxLength. Note that nvarchar(max) won't use more space than needed, but it is still a good idea to limit the size to what you really need, and it is probably even more important when using providers other than SqlServer.
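
For example, in OnModelCreating you can cap the column size like this (a sketch assuming a hypothetical string property named SiteName):

entity.Property(p => p.SiteName)
    .HasMaxLength(255);
// without this, SqlServer would map the column to nvarchar(max)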

Other Changes From Entity Framework Core 1.x to 2.x

There are a few other things I ran into when updating cloudscribe to Entity Framework Core 2. These are things that may or may not impact upgrading your own application depending on whether you did any of the same things I did when using 1.x.

I was using some of the provider specific annotations like this:

modelBuilder.Entity<SiteSettings>(entity =>
{
    entity.ForSqlServerToTable("cs_Site");

    entity.HasKey(p => p.Id);

    entity.Property(p => p.Id)
        .ForSqlServerHasColumnType("uniqueidentifier")
        .ForSqlServerHasDefaultValueSql("newid()");

    ...
});

Those extensions went away in 2.0 so now we just use the non-provider-specific ones like this:

modelBuilder.Entity<SiteSettings>(entity =>
{
    entity.ToTable("cs_Site");

    entity.HasKey(p => p.Id);

    entity.Property(p => p.Id)
        .HasColumnType("uniqueidentifier")
        .HasDefaultValueSql("newid()");

    ...
});

Hopefully you were not using the provider-specific ones and won't run into that problem yourself. For 2.0-preview2 I had to manually edit my existing migration code to make it more consistent with how it would have been generated using the more generic annotations. I have heard that in 2.0 RTM some of that will be handled automatically, but you would still have to make changes for the methods that no longer exist.

I was also wiring up the dependency injection like this in 1.x. I am not sure it was required to do it this way, so it may not impact you; I based my code on examples I found which may date back to the beta or RC days. In any case, this was working for me in 1.x and caused a problem in 2.x:

services.AddEntityFrameworkSqlServer()
    .AddDbContext<CoreDbContext>((serviceProvider, options) =>
        options.UseSqlServer(connectionString)
               .UseInternalServiceProvider(serviceProvider)
               );

I ran into a problem where an error was thrown when trying to execute the migrations:

AggregateException: One or more errors occurred. (Cannot consume scoped service 'Microsoft.EntityFrameworkCore.Infrastructure.ModelCustomizerDependencies' from singleton 'Microsoft.EntityFrameworkCore.Infrastructure.IModelCustomizer'.)

There were a few things I changed which made this error go away. First I changed the above code like this:

services.AddEntityFrameworkSqlServer()
    .AddDbContext<CoreDbContext>(options =>
    {
        options.UseSqlServer(connectionString);
    });

getting rid of the UseInternalServiceProvider. Possibly that alone solved it, but I also did another thing which could have been a factor in the fix. Based on observations of other projects using 2.0, I noticed that a DbContext should apparently now have a protected parameterless constructor, whereas mine only had the constructor that takes a DbContextOptions parameter, so I added an additional constructor like this:

protected CoreDbContext() { }

whereas previously I only had this one:

public CoreDbContext(DbContextOptions<CoreDbContext> options) : base(options) {}

After doing those things that error went away and things were working fine.

However, from following some github issues, I learned that another recommendation for 2.0 is to not trigger migrations/seeding from the Configure method of Startup.cs, which is how it was generally done in 1.x. I had code like this in my Configure method:

CoreEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();
LoggingEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();
SimpleContentEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();

The above code still worked in 2.x for my scenario, but the new guidance is to move that work into Program.cs, so I removed those lines, and now my 2.0 Program.cs looks like this:

public class Program
{
	public static void Main(string[] args)
	{
		var host = BuildWebHost(args);
		
		using (var scope = host.Services.CreateScope())
		{
			var services = scope.ServiceProvider;

			try
			{
				EnsureDataStorageIsReady(services);

			}
			catch (Exception ex)
			{
				var logger = services.GetRequiredService<ILogger<Program>>();
				logger.LogError(ex, "An error occurred while migrating the database.");
			}
		}

		host.Run();
	}

	public static IWebHost BuildWebHost(string[] args) =>
		WebHost.CreateDefaultBuilder(args)
			.UseStartup<Startup>()
			.Build();

	private static void EnsureDataStorageIsReady(IServiceProvider services)
	{
		CoreEFStartup.InitializeDatabaseAsync(services).Wait();
		SimpleContentEFStartup.InitializeDatabaseAsync(services).Wait();
		LoggingEFStartup.InitializeDatabaseAsync(services).Wait();
	}

}

In summary, a few things have changed in Entity Framework Core 2.x that may or may not impact your applications, but I thought I should make note of the issues I encountered in case some of you encounter the same issues.

Last year was my first foray into making my own pickles. I had some great success, and I also made some mistakes that I learned from. The good batches were so good that when I shared them with friends at a party, I got a message the next day asking if it were possible to get some more, and would I consider selling some? But I also messed up some batches and had to throw them away. Here I will provide the steps and variations I use, as well as some cautions so you don't make the same mistakes I made. Even though there are lots of details to think about, making pickles is very easy and doesn't take much time at all if you have the jars and spices on hand. You can use the same recipe and guidance with carrots as well; just make them the same way, substituting carrots for cucumbers, and they come out awesome!

Start with the right cucumber varieties

There are special cucumbers for pickling; don't just use regular cucumbers. Use Kirby, or Sumter, or other varieties that are intended for pickling. You could buy them for making an occasional batch, but you should grow them yourself; they are easy to grow. Also plant some dill weed, even before the cucumber planting if possible, so it will be ready around the time the cucumbers start rolling in. Plant some rosemary in your garden too and get it established; it will grow into perennial bushes and shrubs in moderate climates, and then you can snip sprigs from it, which make very interesting and unique pickles unlike anything you will find in the stores. The rosemary pickles I made last year were the ones people really liked the most. Dill weed is not perennial and has to be planted every year. Last year I did not have any dill planted, but I did make some dill pickles using sprigs of dill purchased from the local grocery store. This year I have dill growing and ready. Cucumbers grow very fast, and once they start rolling in you will be making pickles every other day or so for a while. You can refrigerate them for a few days if needed, but don't wait too long; they are better the fresher they are when you make them.

About the jars

Canning jars are easy to come by; get them at a local store or order from amazon. They need to be sterile. You can boil the jars, but that is a pain; if you have a good dishwasher, run them through twice, set on heavy load and high temperature wash. This is especially important if the jars have been used before. One of the mistakes I made last year was re-using jars after the first batches without getting them sterile enough, so some went bad and had to be thrown out. I boiled them after that, but it was getting towards the end of the season at that point. This year I'm using the dishwasher high temperature wash and they seem to come out sterile; we shall see. I use one quart and two quart jars, and I also use some glass fermenting weights, or pickle pebbles, that I put on top to help keep the cucumbers below the surface of the brine. You can get those on amazon too; they are kind of expensive considering what they are. You can also get by without them if you pack the cucumbers well enough that they aren't trying to float to the top. You can re-use the ring parts of the lids, but never re-use the flat parts. You can buy extra lids fairly cheap on amazon.

Make the brine

Use 2 - 3 tablespoons of sea salt per quart of filtered water. I put this in a large sterile bowl and stir to dissolve the salt. It has to be sea salt; don't use iodized salt. One of the mistakes I made last year was trying to reduce the salt in some batches, but that is a bad idea. The salt is very important in creating the right conditions to allow the good bacteria to thrive and to prevent the unwanted kind. When I used too little, I ended up having to throw out some bad jars and hang my head in disappointment as my pickle supply dwindled.

The basic recipe

  1. On a sterile cutting board, cut the cucumbers into wedges and put them in the jars.
  2. Add spices.
  3. Fill with brine.
  4. Optionally top with fermenting weights or pickle pebbles.
  5. Seal the jars tightly with new lids.
  6. Store the jars at room temperature in some kind of tray or container; as gasses build up, the jars may sometimes ooze out some liquid. If that happens, what I do is carefully and slowly open the lid to gradually let off pressure, then reseal. It is best not to store them in direct light.

I store them for 7 days at room temperature and that is usually enough. I put them in the fridge at least one night before eating them. They will continue to ferment in the fridge but much slower than at room temperature. You can keep them at room temperature longer or much longer if needed or if you don't have space in the fridge.

Spices - get creative

The fun part of making pickles is using different spices. Try different things, be creative, and give them your own personal touch. Not every experiment will turn out great, but many will, and it is fun to experiment. Dill is the traditional spice of choice for good reason, but other things like rosemary make very interesting flavored pickles unlike anything you will find in stores. In fact, you are not likely to find real fermented pickles in stores; they are typically made with vinegar and may be pasteurized. There are probably regulations that make it difficult for the food industry to provide real fermented craft pickles. The ones you make yourself will be better than any you find in the stores.

The spices I have been playing with are:

  • Sprigs of fresh dill - growing my own this year
  • Sprigs of fresh rosemary - I have lots of this growing in my yard; it is also good for aroma therapy, just rub your fingers through it and smell your hands
  • Chopped garlic - I chop it but I don't mince it really small, and I use kind of a lot because I like garlicky pickles
  • Peppercorns - I put a tablespoon or more in the large jars and a tablespoon or less in the small jars
  • Mustard seed - a teaspoon or tablespoon per jar
  • Coriander seed - a teaspoon or tablespoon per jar
  • Cumin seed - I only add this in some jars; I like it but not as much as some of the other combinations

I have made batches with all of the above together, but you don't need all of that every time. I always use the garlic, always use dill or rosemary or both, always use peppercorns, often use coriander, and occasionally use cumin. I use quite a bit; you can probably get by with less, but try different proportions. I buy the peppercorns and other seed spices in bulk from amazon, which is much less expensive than buying little jars at the grocery store, and having more on hand allows me to add them more generously than I might otherwise.

It may sound like a lot of steps, but it really doesn't take long. As the cucumbers start rolling in fast, I make batches every few days, and it takes me only about 15 - 20 minutes including the time to pick and rinse them. Fermented foods are very good for you and very delicious. I can't wait till my first batch of the year is ready to eat, which will be next week.

There are lots of good little books about fermenting if you want to learn more.

Enjoy, and don't forget to share them with your friends!

Most of the mojoPortal community members know me as the founder and primary developer of mojoPortal, and know that I've stopped actively developing it. As is often the case with developers, I'm attracted to shiny new technology and want to work on other things. But mojoPortal still delivers a lot of business value, and I know quite a few folks who have been using mojoPortal for years who are quite happy with it and want to keep using it.

It turns out that one of our top community members, Joe Davis of i7MEDIA, would really like to keep the project going, and we have come to a business agreement that will let his organization take over both as the maintainer of the open source project and as the official vendor for the mojoPortal add-on products. In other words, mojoPortal is back under active development with a new team behind it that will continue to support the project and the products.

mojoPortal was named after my dog mojo, the best dog I've ever had.

I could not be more thrilled by this! It gives me great satisfaction to know that I have created something that so many people have found useful and that it will continue to be used and supported by a top community member and his organization. 

Joe Davis and i7MEDIA have always been the most qualified consultants for mojoPortal other than myself, and I've always had great confidence in referring customers and projects to them over the years. Joe has been a good friend and my go-to guy when I needed to refer a customer or project that I did not have the capacity for. Whether you need design help, hosting, or custom feature development, i7MEDIA is the team you want, with years of experience. mojoPortal was my baby for over a decade, and I am very glad to know my baby is in good hands! Long live mojoPortal!

I am honored, humbled, and elated to report that for the first time in my career I’ve been recognized with a Microsoft MVP Award!

According to the email I received, there are only a few thousand MVPs worldwide, so it is a pretty big deal for me to be part of this group. While I haven’t been told any specifics, my understanding is that the factors that qualify people for this award include activity within the technical community, such as contributions to open source projects, answering questions in technology forums, technical writing/blogging, and public speaking at developer conferences. I’ve been an active open source developer since 2004, when I founded the mojoPortal project, and over the years I’ve answered thousands of questions in the mojoPortal forums, but that project is built on older web technology and is not as popular these days as it once was. More recently I’ve founded a new set of projects collectively branded as “cloudscribe” and built on the latest, greatest ASP.NET Core stack. I’ve spent the last year immersing myself in this new technology stack while it was in preview, and I’ve accumulated about 4500 reputation points on stackoverflow in the past year, mostly answering questions related to ASP.NET Core.

I’m really excited about ASP.NET Core because it makes it possible and natural to use design patterns that were difficult or impossible to use in the old WebForms framework, and it gives us a truly modern web framework that embraces the web. The old WebForms framework was really trying to make web development more like desktop application development, and it did so by hiding the web in such a way that one could build web applications and sites without really understanding the underlying web technology. Looking back, I guess that was a good thing for the early days of the web, but after building web applications for many years you end up learning the web technology anyway, and you begin to realize that the old framework was making it harder for you to work directly with the nature of the web because of all the abstractions that were layered on top of it. By contrast, the new framework embraces the nature of the web and requires you to understand it and work with it directly, and ultimately that is a good thing.

One of the great things about doing open source development and sharing your work is that online communities are global and you make friends worldwide. I have been truly blessed to get to know some really nice people in far-flung corners of the world that I would never have met otherwise. In fact, I would like to take this moment to offer sincere thanks and gratitude to my friend Guruprasad Balaji, a long time mojoPortal community member from Chennai, India, who nominated me for this MVP Award. Who knows, maybe someday I will get an opportunity to travel to India and other places in the world and meet some of my online friends in person.

One of the benefits of the MVP Award that I’m most excited about is that once a year in November, Microsoft holds an MVP Summit, and MVPs from all over the world fly in to Microsoft headquarters in Redmond, Washington! For a few days I will get to attend some insider technology sessions and meet and network with other MVPs! I will get to meet in person some really smart people I’ve admired online for many years!

Now that I’m a part of this MVP community, I intend to do my best to be included for many years to come. To earn my keep, I will of course keep doing open source development and helping people on stackoverflow, but I will also try to be more active in public speaking. On October 25, 2016, I’m scheduled to give a presentation on the ASP.NET Core MVC framework at the Enterprise Developers Guild in Charlotte, NC. I’m very excited about this presentation, and I plan to use my latest open source projects to illustrate the concepts I will be presenting with real working code examples. I hope you can attend the event, but even if not, I hope you will take a look at my new open source projects related to ASP.NET Core:

  • cloudscribe Core – a multi-tenant web application foundation providing management of sites (tenants), users, roles, and role membership. Pretty much every web project needs that kind of thing, so this is a foundation to build other things on top of so you don’t have to re-implement that stuff for every new project.
  • cloudscribe Simple Content – a simple yet flexible blog and content engine that can work with or without a database (actually at the time of this writing it only works without a database using my NoDb storage project, but soon I will implement Entity Framework data storage and possibly MongoDb at some point)
  • NoDb – a “no database” file system storage because many projects don’t need an enterprise database. Think of NoDb like a file system document database that stores objects in the file system serialized as json (or in some cases xml). Great for brochure web sites, and great for prototyping new applications.
  • cloudscribe Navigation – MVC components for navigation menus and breadcrumbs. I use this in cloudscribe Core and in cloudscribe SimpleContent, but it is useful for any MVC web project
  • cloudscribe Pagination – MVC TagHelpers for rendering pagination links for multi-page content
  • cloudscribe Syndication – a re-useable RSS feed generator for ASP.NET Core. I use this for the RSS feed in cloudscribe Simple Content, but it could easily be used in other projects
  • cloudscribe MetaWeblog – a re-useable implementation of the MetaWeblog API for ASP.NET Core. I use this in cloudscribe Simple Content to make it possible to author content using Open Live Writer
  • cloudscribe Logging - an implementation of ILogger and ILoggerProvider that logs to the database using a pluggable model supporting multiple data platforms. Also provides an MVC controller for viewing and managing the log data
  • cloudscribe SimpleAuth – simple no-database required user authentication for web applications where only a few people need to be able to login

I’m a little late in blogging about my MVP Award; I found out in early July, but I really wanted to be able to blog about it using my new Simple Content project. Previously this site was running on a really old version of mojoPortal, but I just rebuilt it using my new Simple Content project for the blog. It took a while to get everything ready because this is the first real world project I’ve done using Simple Content. I fixed a few bugs and implemented a few little improvements as part of getting this site completed, but there is still more work to do to complete all the features I’ve planned. This site is using NoDb for storage, which gives me an opportunity to prove the viability of building sites without a database. I’ve never really considered myself a web designer; I am primarily a web developer, but using bootstrap makes it easy to put together a professional looking, mobile responsive site. I’m no visual artist, but I’m a decent mechanic when it comes to CSS. This is kind of a quick first draft design; I may yet change the color scheme and get a little more creative with it, but I think it is a major improvement over the outdated design of my old web site.

When building cross platform web applications on ASP.NET Core, one might think that since the framework itself is cross platform, your application will just work on any supported platform without having to do anything special. That is mostly true, but there are still some things to consider during development to avoid problems that can happen due to platform differences. For example, file systems on linux are case sensitive, and therefore so are urls, so you generally want to standardize on lower case urls. Also, if building strings with file system paths, you should always use Path.Combine or concatenate with Path.DirectorySeparatorChar rather than a back slash, since the back slash is Windows specific and the forward slash is used on linux/mac. Additionally, time zone ids are different on Windows than on linux/mac, which use IANA time zone ids, and if you pass the TimeZoneInfo class a time zone id that is invalid for the current platform, it throws an exception. If you are storing time zone ids for users or sites in a database and you migrate the site to a different platform, all of the stored time zone ids could be invalid for the new platform. I ran into this problem developing my cloudscribe project, because it supports the concept of a site level time zone that is used by default, while an authenticated user may choose their own time zone. The approach I use is to always store dates in the database as UTC; they can then be adjusted to the given time zone for display as needed.
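
As a quick illustration of the path point above, here is a minimal sketch of platform neutral path building, assuming env is the injected IHostingEnvironment:

using System.IO;

// Path.Combine inserts the correct separator for the current platform,
// so the same code works on Windows, linux, and mac.
var uploadsPath = Path.Combine(env.ContentRootPath, "uploads", "images");
// avoid hard coded back slashes like env.ContentRootPath + "\\uploads",
// which only work on Windows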

To solve the time zone problem, I turned to the NodaTime project, which provides a comprehensive date and time library that is arguably superior to the DateTime and TimeZoneInfo classes provided in the .NET framework. For applications and features that are focused on dates, such as calendaring and scheduling, I think I would go all in on the NodaTime classes instead of the built in framework classes. But for more common scenarios it makes sense to me to stick with the standard DateTime class for entity properties, since that will typically map directly to a corresponding database datetime on various database platforms without any friction. So for my purposes, what I wanted was to use NodaTime as a means to standardize on IANA time zone ids, and as a tool to convert back and forth between UTC and various time zones as needed, in a way that works the same on any platform. NodaTime has a built in list of the IANA time zones, and it has the functionality needed for the conversions to and from UTC. While I am continuing to use the standard DateTime class for all the date properties on my entities, I am no longer using the standard TimeZoneInfo class for converting dates back and forth. Instead I use a little TimeZoneHelper class that I implemented to encapsulate the conversions, so that none of my other code needs to know about NodaTime. My TimeZoneHelper also exposes the list of time zone ids provided by NodaTime, so I can use it to populate a dropdown list for time zone selection, and I simply store the IANA time zone id in the database for my sites and users.

This TimeZoneHelper class is not very large or complex, and I think it would be useful for anyone else who wants a consistent way to handle time zones in a platform neutral way. Feel free to use this code in your own projects!

using Microsoft.Extensions.Logging;
using NodaTime;
using NodaTime.TimeZones;
using System;
using System.Collections.Generic;

namespace cloudscribe.Web.Common
{
    public class TimeZoneHelper : ITimeZoneHelper
    {
        public TimeZoneHelper(
            IDateTimeZoneProvider timeZoneProvider,
            ILogger<TimeZoneHelper> logger = null
            )
        {
            tzSource = timeZoneProvider;
            log = logger;
        }

        private IDateTimeZoneProvider tzSource;
        private ILogger log;

        public DateTime ConvertToLocalTime(DateTime utcDateTime, string timeZoneId)
        {
            DateTime dUtc;
            switch (utcDateTime.Kind)
            {
                case DateTimeKind.Utc:
                    dUtc = utcDateTime;
                    break;
                case DateTimeKind.Local:
                    dUtc = utcDateTime.ToUniversalTime();
                    break;
                default: //DateTimeKind.Unspecified
                    dUtc = DateTime.SpecifyKind(utcDateTime, DateTimeKind.Utc);
                    break;
            }

            var timeZone = tzSource.GetZoneOrNull(timeZoneId);
            if (timeZone == null)
            {
                if (log != null)
                {
                    log.LogWarning("failed to find timezone for " + timeZoneId);
                }

                return utcDateTime;
            }

            var instant = Instant.FromDateTimeUtc(dUtc);
            var zoned = new ZonedDateTime(instant, timeZone);
            return new DateTime(
                zoned.Year,
                zoned.Month,
                zoned.Day,
                zoned.Hour,
                zoned.Minute,
                zoned.Second,
                zoned.Millisecond,
                DateTimeKind.Unspecified);
        }

        public DateTime ConvertToUtc(
            DateTime localDateTime,
            string timeZoneId,
            ZoneLocalMappingResolver resolver = null
            )
        {
            if (localDateTime.Kind == DateTimeKind.Utc) return localDateTime;

            if (resolver == null) resolver = Resolvers.LenientResolver;
            var timeZone = tzSource.GetZoneOrNull(timeZoneId);
            if (timeZone == null)
            {
                if (log != null)
                {
                    log.LogWarning("failed to find timezone for " + timeZoneId);
                }
                return localDateTime;
            }

            var local = LocalDateTime.FromDateTime(localDateTime);
            var zoned = timeZone.ResolveLocal(local, resolver);
            return zoned.ToDateTimeUtc();
        }

        public IReadOnlyCollection<string> GetTimeZoneList()
        {
            return tzSource.Ids;
        }
    }
}

You'll notice that this helper has some constructor dependencies. You can wire those up to be injected in the ConfigureServices method of your Startup class like this:

services.TryAddSingleton<IDateTimeZoneProvider>(new DateTimeZoneCache(TzdbDateTimeZoneSource.Default));
services.TryAddScoped<ITimeZoneHelper, TimeZoneHelper>();
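
With that in place, any class that takes ITimeZoneHelper as a constructor dependency can do the conversions. Here is a hypothetical usage sketch, where post.PubDate is assumed to be a UTC DateTime and user.TimeZoneId a stored IANA time zone id:

// convert a stored UTC date to the user's time zone for display
var localTime = timeZoneHelper.ConvertToLocalTime(post.PubDate, user.TimeZoneId);

// convert a user-entered local date back to UTC before saving
var utcDate = timeZoneHelper.ConvertToUtc(localTime, user.TimeZoneId);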

Note that I made an interface, ITimeZoneHelper, that this class implements, which would allow me to plug in different logic if I ever need or want to, but you could simply remove the interface declaration if you use this in your own project. I hope you find this code useful!
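
For reference, a minimal sketch of what the ITimeZoneHelper interface would look like, inferred from the methods of the class above:

using System;
using System.Collections.Generic;
using NodaTime.TimeZones;

namespace cloudscribe.Web.Common
{
    public interface ITimeZoneHelper
    {
        DateTime ConvertToLocalTime(DateTime utcDateTime, string timeZoneId);

        DateTime ConvertToUtc(
            DateTime localDateTime,
            string timeZoneId,
            ZoneLocalMappingResolver resolver = null);

        IReadOnlyCollection<string> GetTimeZoneList();
    }
}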