Huge thanks to Dave Rael for having me on the Developer on Fire Podcast! I am honored to have been able to talk with Dave on his show. The Developer on Fire Podcast is really unique because it focuses on the people more than on the technology. There is some tech talk of course but he really tries to get at the stories, the personality, what makes you tick, who you are, what motivates you and how you engage with your work. The show has been around for a long time and he has had some real luminaries among his guests including many of my personal tech heroes. I highly recommend adding this podcast to your listening routine if you haven't already.

Historically I've always been a database guy; I never used Entity Framework until I started working with ASP.NET Core and implementing it for my cloudscribe projects. Recently I began getting cloudscribe ready for ASP.NET Core 2.0 and Entity Framework Core 2.0, and in the process I learned that some of my "database guy" assumptions about using Entity Framework were wrong. I also learned about some things that need to be done differently when using Entity Framework Core 2.0 with ASP.NET Core 2.0 vs how things were done in 1.x. In this post I will share what I have learned in hopes it may help others.

Avoid Using Database Default Values

As a "database guy", it seemed natural to me to want to specify default values in the database, but when you do that with Entity Framework there are some important nuances that in general make that a bad idea. In most cases you should instead use default values on the entity like this:

public bool AllowNewRegistration { get; set; } = true;

I was doing that, but as a "database guy", my instinct was to also make that the default in the database by specifying a default value in OnModelCreating of my DbContext like this:

entity.Property(p => p.AllowNewRegistration)
    .IsRequired()
    .HasColumnType("bit")
    .HasDefaultValue(true);

However, after updating to Entity Framework Core 2.0-preview2, I began seeing warnings both in the application logs and when generating a new migration, like this:

The 'bool' property 'AllowNewRegistration' on entity type 'SiteSettings' is configured with a database-generated default. This default will always be used when the property has the value 'false', since this is the CLR default for the 'bool' type. Consider using the nullable 'bool?' type instead so that the default will only be used when the property value is 'null'.

This kind of scared me at first because I thought it was a change in behavior from Entity Framework Core 1.x to 2.x, but it turned out the behavior is the same in 1.x; only the warnings are new in 2.0. It scared me because it sounded like any time my entity value was false, the database default of true would be used. That is in fact what happens, but only on inserts; on updates, whatever value is set on the entity is respected. There is some nuance in understanding this warning. If you have a bool property on the entity and you specify a database default of false, you will still get the warning, but there isn't really a problem: if the entity value is true at insert time you get the expected result, and it will be true in the database after the insert. The trouble comes when you specify a database default of true. Since the CLR default for bool is false, if the entity has false for that property at insert time it will get the database default of true rather than the value that was set on the entity. Whether this is a real problem in the application depends on whether that property is surfaced in the UI for creating the entity. If the property is only editable when updating the entity and not when creating it, you still get the expected results. But if it is a property that you surface in the UI so it can be specified at creation time, then you will get an unexpected result whenever it is set to false, because the database default will be applied.
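
To make that concrete, here is a minimal sketch of the insert behavior, assuming the SiteSettings entity above and a DbContext instance named db that exposes a SiteSettings DbSet:

// AllowNewRegistration is configured with HasDefaultValue(true) as shown above
var site = new SiteSettings { AllowNewRegistration = false };
db.SiteSettings.Add(site);
db.SaveChanges();
// Because false is the CLR default for bool, EF Core omits the column from the INSERT,
// so the database default wins and the row ends up with AllowNewRegistration = true.

// Updates behave differently: the value set on the entity is always sent.
site.AllowNewRegistration = false;
db.SaveChanges(); // the row now has AllowNewRegistration = false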

So, a good rule of thumb is: do not specify default values in the database. As an aside, you generally do not need to specify the column type as I did above either; it causes no problems or warnings, but you can usually trust the Entity Framework provider to choose the right data type for the database.
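
With that in mind, the property configuration shown earlier reduces to something like this, relying only on the default value set on the entity itself:

entity.Property(p => p.AllowNewRegistration)
    .IsRequired();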

Exceptions to this rule do exist

Ok, so should we never specify a default value? Never say never. Given what we now understand about how default values are used on inserts, consider what happens when we add a new bool property to the entity with a default value of true on the entity itself: what will happen to existing rows when the migration is applied and no default value is specified in the database? It turns out the existing rows will not get the entity default of true but will instead get the CLR default of false, which is not what we want.

In this case the solution is to go ahead and specify a default value of true in the database, generate a new migration, then remove the default value and generate another migration. This way the existing rows will get true from the first migration which is what we want, then the second migration will remove the database default value so that we get expected results on new inserts.
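
Here is a rough sketch of that two-step approach; the property name and migration names below are just placeholders:

// Step 1: temporarily configure the database default, then add a migration
// so existing rows are populated with true when the migration runs.
entity.Property(p => p.RequireConfirmedEmail) // hypothetical new bool property
    .HasDefaultValue(true);
// dotnet ef migrations add AddRequireConfirmedEmailWithDefault

// Step 2: remove the HasDefaultValue call from OnModelCreating, then add
// another migration so new inserts use whatever value is set on the entity.
// dotnet ef migrations add RemoveRequireConfirmedEmailDefault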

Do Specify MaxLength Where Appropriate

This one I learned when I first began using Entity Framework Core, not as part of updating from 1.x to 2.x, but it is worth mentioning for anyone new to Entity Framework. If you don't specify a MaxLength for string properties on your entity, nvarchar(max) will be used in SqlServer (or text in other database platforms), so unless you really need that much space for the value you should specify a MaxLength. Note that nvarchar(max) won't use more space than needed, but it is still a good idea to limit the size to what you actually need, and that is probably even more important when using providers other than SqlServer.
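
For example, a minimal sketch of limiting the length in OnModelCreating (the property name here is just for illustration):

entity.Property(p => p.SiteName) // hypothetical string property
    .HasMaxLength(255);          // maps to nvarchar(255) on SqlServer instead of nvarchar(max)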

Other Changes From Entity Framework Core 1.x to 2.x

There are a few other things I ran into when updating cloudscribe to Entity Framework Core 2. These are things that may or may not impact upgrading your own application depending on whether you did any of the same things I did when using 1.x.

I was using some of the provider specific annotations like this:

modelBuilder.Entity<SiteSettings>(entity =>
{
    entity.ForSqlServerToTable("cs_Site");

    entity.HasKey(p => p.Id);

    entity.Property(p => p.Id)
        .ForSqlServerHasColumnType("uniqueidentifier")
        .ForSqlServerHasDefaultValueSql("newid()");

    // ...
});

Those extensions went away in 2.0 so now we just use the non-provider-specific ones like this:

modelBuilder.Entity<SiteSettings>(entity =>
{
    entity.ToTable("cs_Site");

    entity.HasKey(p => p.Id);

    entity.Property(p => p.Id)
        .HasColumnType("uniqueidentifier")
        .HasDefaultValueSql("newid()");

    // ...
});

Hopefully you were not using the provider-specific ones and won't run into that problem yourself. For 2.0-preview2 I had to manually edit my existing migration code to make it more consistent with how it would have been generated using the more generic annotations. I have heard that 2.0 RTM will handle some of that automatically, but you would still have to make changes for the methods which no longer exist.

I was also wiring up the dependency injection like this in 1.x, though I am not sure it was required to do it this way, so it may not impact you. I based my code on examples I found which may date back to the beta or RC days; in any case this was working for me in 1.x and caused a problem in 2.x:

services.AddEntityFrameworkSqlServer()
    .AddDbContext<CoreDbContext>((serviceProvider, options) =>
        options.UseSqlServer(connectionString)
               .UseInternalServiceProvider(serviceProvider)
               );

I ran into a problem where an error was thrown when trying to execute the migrations. I was getting this error:

AggregateException: One or more errors occurred. (Cannot consume scoped service 'Microsoft.EntityFrameworkCore.Infrastructure.ModelCustomizerDependencies' from singleton 'Microsoft.EntityFrameworkCore.Infrastructure.IModelCustomizer'.)

There were a few things I changed which made this error go away. First, I changed the above code like this:

services.AddEntityFrameworkSqlServer()
    .AddDbContext<CoreDbContext>(options =>
    {
        options.UseSqlServer(connectionString);
    });

getting rid of the UseInternalServiceProvider call. Possibly that solved it, but I also did another thing which could have been a factor in the fix. Based on observations in other projects using 2.0, it seems the DbContext should now have a protected parameterless constructor, whereas mine only had the constructor that takes a DbContextOptions parameter, so I added an additional constructor like this:

protected CoreDbContext() { }

whereas previously I only had this one:

public CoreDbContext(DbContextOptions<CoreDbContext> options) : base(options) {}

After doing those things that error went away and things were working fine.

However, from following some github issues, I learned another thing that is recommended for 2.0: do not trigger migrations/seeding from the Configure method of Startup.cs, which is how it was generally done in 1.x. I had code like this in my Configure method:

CoreEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();
LoggingEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();
SimpleContentEFStartup.InitializeDatabaseAsync(app.ApplicationServices).Wait();

The above code still worked in 2.x for my scenario, but the new guidance is to move that work into Program.cs, so I removed those lines and now my 2.0 Program.cs looks like this:

public class Program
{
	public static void Main(string[] args)
	{
		var host = BuildWebHost(args);
		
		using (var scope = host.Services.CreateScope())
		{
			var services = scope.ServiceProvider;

			try
			{
				EnsureDataStorageIsReady(services);

			}
			catch (Exception ex)
			{
				var logger = services.GetRequiredService<ILogger<Program>>();
				logger.LogError(ex, "An error occurred while migrating the database.");
			}
		}

		host.Run();
	}

	public static IWebHost BuildWebHost(string[] args) =>
		WebHost.CreateDefaultBuilder(args)
			.UseStartup<Startup>()
			.Build();

	private static void EnsureDataStorageIsReady(IServiceProvider services)
	{
		CoreEFStartup.InitializeDatabaseAsync(services).Wait();
		SimpleContentEFStartup.InitializeDatabaseAsync(services).Wait();
		LoggingEFStartup.InitializeDatabaseAsync(services).Wait();
	}

}

In summary, a few things have changed in Entity Framework Core 2.x that may or may not impact your applications, but I thought I should make note of the issues I encountered in case some of you run into the same problems.

I am honored, humbled, and elated to report that for the first time in my career I’ve been recognized with a Microsoft MVP Award!

According to the email I received there are only a few thousand MVPs worldwide, so it is a pretty big deal for me to be part of this group. While I haven’t been told any specifics, my understanding is that the factors that qualify people for this award include activity within the technical community: contributions to open source projects, answering questions in technology forums, technical writing/blogging, and public speaking at developer conferences. I’ve been an active open source developer since 2004 when I founded the mojoPortal project, and over the years I’ve answered thousands of questions in the mojoPortal forums, but that project is built on older web technology and is not as popular these days as it once was. More recently I’ve founded a new set of projects collectively branded as “cloudscribe” and built on the latest, greatest ASP.NET Core stack. I’ve spent the last year immersing myself in this new technology stack while it was in preview, and I’ve accumulated about 4500 reputation points on stackoverflow in the past year, mostly answering questions related to ASP.NET Core.

I’m really excited about ASP.NET Core because it makes it possible and natural to use design patterns that were difficult or impossible to use in the old WebForms framework, and it gives us a truly modern web framework that embraces the web. The old WebForms framework was really trying to make web development more like desktop application development, and it did so by hiding the web in such a way that one could build web applications and sites without really understanding the underlying web technology. Looking back I guess that was a good thing for the early days of the web, but after building web applications for many years you end up learning the web technology anyway, and you begin to realize that the old framework was making it harder for you to work directly with the nature of the web because of all the abstractions that have been layered on top of it in the framework. By contrast the new framework embraces the nature of the web and requires you to understand it and work with it directly, and ultimately that is a good thing.

One of the great things about doing open source development and sharing your work is that online communities are global and you make friends worldwide. I have been truly blessed to get to know some really nice people in far-flung corners of the world that I would never have met otherwise. In fact I would like to take this moment to offer sincere thanks and gratitude to my friend Guruprasad Balaji, a long time mojoPortal community member from Chennai, India, who nominated me for this MVP Award. Who knows, maybe someday I will get an opportunity to travel to India and other places in the world and get to meet some of my online friends in person.

One of the benefits of the MVP Award that I’m most excited about is that once a year in November, Microsoft holds an MVP Summit and MVPs from all over the world fly in to Microsoft Headquarters in Redmond Washington!  For a few days I will get to attend some insider technology sessions and meet and network with other MVPs! I will get to meet in person with some really smart people I’ve admired online for many years!

Now that I’m a part of this MVP community I intend to do my best to be included for many years to come. To earn my keep, I will of course keep doing open source development and helping people on stackoverflow, but I will also try to be more active in public speaking.  On October 25, 2016 I’m scheduled to give a presentation on ASP.NET Core MVC framework at the Enterprise Developers Guild in Charlotte, NC. I’m very excited about this presentation and I plan to use my latest open source projects to illustrate the concepts I will be presenting with real working code examples. I hope you can attend the event, but even if not, I hope you will take a look at my new open source projects related to ASP.NET Core:

  • cloudscribe Core – a multi-tenant web application foundation providing management of sites (tenants), users, roles, and role membership. Pretty much every web project needs that kind of thing, so this is a foundation to build other things on, saving you from re-implementing that stuff for every new project.
  • cloudscribe Simple Content – a simple yet flexible blog and content engine that can work with or without a database (actually at the time of this writing it only works without a database using my NoDb storage project, but soon I will implement Entity Framework data storage and possibly MongoDb at some point)
  • NoDb – a “no database” file system storage because many projects don’t need an enterprise database. Think of NoDb like a file system document database that stores objects in the file system serialized as json (or in some cases xml). Great for brochure web sites, and great for prototyping new applications.
  • cloudscribe Navigation – MVC components for navigation menus and breadcrumbs. I use this in cloudscribe Core and in cloudscribe SimpleContent, but it is useful for any MVC web project
  • cloudscribe Pagination – MVC TagHelpers for rendering pagination links for multi-page content
  • cloudscribe Syndication – a re-useable RSS feed generator for ASP.NET Core. I use this for the RSS feed in cloudscribe Simple Content, but it could easily be used in other projects
  • cloudscribe MetaWeblog – a re-useable implementation of the metaweblog api for ASP.NET Core, I use this in cloudscribe Simple Content to make it possible to author content using Open Live Writer
  • cloudscribe Logging - an implementation of ILogger and ILoggerProvider that logs to the database using a pluggable model supporting multiple data platforms. Also provides an MVC controller for viewing and managing the log data
  • cloudscribe SimpleAuth – simple no-database required user authentication for web applications where only a few people need to be able to login

I’m a little late in blogging about my MVP Award, I found out in early July, but I really wanted to be able to blog about it using my new Simple Content project. Previously this site was running on a really old version of mojoPortal, but I just rebuilt this site using my new Simple Content project for the blog. It took a while to get everything ready because this is the first real world project I’ve done using Simple Content. I fixed a few bugs and implemented a few little improvements as part of getting this site completed, but there is still more work to do to complete all the features I’ve planned. This site is using NoDb for storage which gives me an opportunity to prove the viability of building sites without a database. I’ve never really considered myself a web designer, I am primarily a web developer, but using bootstrap makes it easy to put together a professional looking, mobile responsive site. I’m no visual artist but I’m a decent mechanic when it comes to CSS. This is kind of a quick first draft design, I may yet change the color scheme and get a little more creative with it, but I think it is a major improvement over the outdated design of my old web site.

When building cross platform web applications on ASP.NET Core, one may think that since the framework itself is cross platform, your application will just work on any supported platform without having to do anything special. That is mostly true, but there are still some things to consider during development to avoid problems that can happen due to platform differences. For example, file systems on linux are case sensitive and therefore so are urls, so you generally want to standardize on lower case urls. Also, if building strings with file system paths, you need to always use Path.Combine or concatenate with Path.DirectorySeparatorChar rather than a backslash, since the backslash is Windows specific and a forward slash is used on linux/mac.

Additionally, time zone ids are different on Windows than on linux/mac, which use IANA time zone ids, and if you pass an invalid time zone id for the current platform to the TimeZoneInfo class it results in exceptions. If you are storing time zone ids for users or sites in a database and you migrate the site to a different platform, all of the stored time zone ids could be invalid for the new platform. I ran into this problem developing my cloudscribe project, because it supports the concept of a site level time zone that is used by default, but an authenticated user may choose their own time zone. The approach I use is to always store dates in the database as UTC; they can then be adjusted to the given time zone for display as needed.
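
A couple of quick sketches of those platform differences; the path segments and time zone ids below are just examples, and contentRootPath is assumed to come from the hosting environment:

// Build paths in a platform-neutral way rather than hard-coding backslashes:
var themePath = Path.Combine(contentRootPath, "themes", "default");

// Windows and linux/mac use different time zone ids, so this works on Windows
// but throws TimeZoneNotFoundException on linux/mac, which expect IANA ids
// such as "America/New_York" instead (at least without extra mapping):
var tz = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");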

To solve the problem, I turned to the NodaTime project, which provides a comprehensive datetime library that is arguably superior to the DateTime and TimeZoneInfo classes provided in the .NET framework. For applications and features that are focused around dates, such as calendaring and scheduling, I think I would go all in on using the NodaTime classes instead of the built in framework classes, but for more common scenarios it makes sense to me to stick with the standard DateTime class for entity properties, since that will typically map directly to a corresponding database datetime on various database platforms without any friction. So for my purposes, what I wanted was to use NodaTime as a means to standardize on IANA time zone ids, and as a tool to convert back and forth from UTC datetime to various time zones as needed in a way that will work the same on any platform. NodaTime has a built in list of the IANA time zones, and it has the needed functionality for doing the conversions to and from UTC. While I am continuing to use the standard DateTime class for all my date properties on entities, I am no longer using the standard TimeZoneInfo class that is built into the .NET framework for converting dates back and forth, but instead using a little TimeZoneHelper class that I implemented for encapsulating the conversions so that none of my other code needs to know about NodaTime. My TimeZoneHelper also exposes the list of TimeZoneIds provided by NodaTime so I can use it to populate a dropdown list for time zone selection, and I simply store the IANA time zone id in the database for my sites and users.

This TimeZoneHelper class is not very large or complex, and I think it would be useful for anyone else who wants to handle time zones in a consistent, platform-neutral way. Feel free to use this code in your own projects!

using Microsoft.Extensions.Logging;
using NodaTime;
using NodaTime.TimeZones;
using System;
using System.Collections.Generic;

namespace cloudscribe.Web.Common
{
    public class TimeZoneHelper : ITimeZoneHelper
    {
        public TimeZoneHelper(
            IDateTimeZoneProvider timeZoneProvider,
            ILogger<TimeZoneHelper> logger = null
            )
        {
            tzSource = timeZoneProvider;
            log = logger;
        }

        private IDateTimeZoneProvider tzSource;
        private ILogger log;

        public DateTime ConvertToLocalTime(DateTime utcDateTime, string timeZoneId)
        {
            DateTime dUtc;
            switch(utcDateTime.Kind)
            {
                case DateTimeKind.Utc:
                    dUtc = utcDateTime;
                    break;
                case DateTimeKind.Local:
                    dUtc = utcDateTime.ToUniversalTime();
                    break;
                default: //DateTimeKind.Unspecified
                    dUtc = DateTime.SpecifyKind(utcDateTime, DateTimeKind.Utc);
                    break;
            }

            var timeZone = tzSource.GetZoneOrNull(timeZoneId);
            if (timeZone == null)
            {
                if(log != null)
                {
                    log.LogWarning("failed to find timezone for " + timeZoneId);
                }
               
                return utcDateTime;
            }

            var instant = Instant.FromDateTimeUtc(dUtc);
            var zoned = new ZonedDateTime(instant, timeZone);
            return new DateTime(
                zoned.Year,
                zoned.Month,
                zoned.Day,
                zoned.Hour,
                zoned.Minute,
                zoned.Second,
                zoned.Millisecond,
                DateTimeKind.Unspecified);
        }

        public DateTime ConvertToUtc(
            DateTime localDateTime,
            string timeZoneId,
            ZoneLocalMappingResolver resolver = null
            )
        {
            if (localDateTime.Kind == DateTimeKind.Utc) return localDateTime;

            if (resolver == null) resolver = Resolvers.LenientResolver;
            var timeZone = tzSource.GetZoneOrNull(timeZoneId);
            if (timeZone == null)
            {
                if (log != null)
                {
                    log.LogWarning("failed to find timezone for " + timeZoneId);
                }
                return localDateTime;
            }

            var local = LocalDateTime.FromDateTime(localDateTime);
            var zoned = timeZone.ResolveLocal(local, resolver);
            return zoned.ToDateTimeUtc();
        }

        public IReadOnlyCollection<string> GetTimeZoneList()
        {
            return tzSource.Ids;
        }
    }
}

You'll notice that this helper has some constructor dependencies. You can wire those up for injection in the ConfigureServices method of your Startup class like this:

services.TryAddSingleton<IDateTimeZoneProvider>(new DateTimeZoneCache(TzdbDateTimeZoneSource.Default));
services.TryAddScoped<ITimeZoneHelper, TimeZoneHelper>();

Note that I made an interface, ITimeZoneHelper, that this class implements, which would allow me to plug in different logic if I ever need or want to, but you could simply remove the interface declaration if you use this in your own project. I hope you find this code useful!
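
Here is a hypothetical usage sketch, assuming the registrations above and an IANA time zone id such as "America/New_York" stored for the site or user (the controller is just for illustration):

public class ExampleController : Controller
{
    private readonly ITimeZoneHelper _timeZoneHelper;

    public ExampleController(ITimeZoneHelper timeZoneHelper)
    {
        _timeZoneHelper = timeZoneHelper;
    }

    public IActionResult Index()
    {
        // dates are stored as UTC and converted to the chosen time zone for display
        var createdUtc = DateTime.UtcNow;
        var createdLocal = _timeZoneHelper.ConvertToLocalTime(createdUtc, "America/New_York");

        // the list of IANA ids can be used to populate a time zone selection dropdown
        var timeZoneIds = _timeZoneHelper.GetTimeZoneList();

        return View();
    }
}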

If you are porting web applications to ASP.NET Core or building new web applications, you may notice that the System.Net.Mail namespace is not implemented in .NET Core. My understanding is that there are intentions to implement that namespace later, but it may be a bit of a stumbling block for early adopters, since sending email is a very fundamental thing that most web applications need to do for common tasks such as verifying an email address for a new account or facilitating password reset.

There are some examples out there for sending email with various 3rd party services such as SendGrid, MailGun, Elastic Email, and the like, by using REST APIs to send the mail instead of using SMTP, and that is certainly a good option to consider. But for those who already have an SMTP server that they want to use, a better solution is needed. The good news is a better solution already exists and it works currently with RC1 of ASP.NET Core and will most surely also be available for RC2 and later releases.

There are actually two related projects that you should know about: MailKit and MimeKit. The goal of the MailKit project is “to provide the .NET world with robust, fully featured and RFC-compliant SMTP, POP3, and IMAP client implementations”, and indeed it meets that goal and in many ways is actually more powerful and flexible than the traditional System.Net.Mail components.

Here I will show some working example code that comes from my cloudscribe Core project. More specifically this example code is from my EmailSender class which is a work in progress, but functional enough to be a good example.

Let's start with a simple class to represent and encapsulate the settings we need to connect and authenticate with an SMTP server:

public class SmtpOptions
{
    public string Server { get; set; } = string.Empty;
    public int Port { get; set; } = 25;
    public string User { get; set; } = string.Empty;
    public string Password { get; set; } = string.Empty;
    public bool UseSsl { get; set; } = false;
    public bool RequiresAuthentication { get; set; } = false;
    public string PreferredEncoding { get; set; } = string.Empty;
}

You would new up one of these, set the properties according to your SMTP server configuration, and then pass it in as one of the parameters to an EmailSender class that looks something like this:

using MailKit.Net.Smtp;
using MimeKit;
using System;
using System.Threading.Tasks;

namespace cloudscribe.Messaging.Email
{

    public class EmailSender
    {
        public EmailSender()
        {
        }

        public async Task SendEmailAsync(
            SmtpOptions smtpOptions,
            string to,
            string from,
            string subject,
            string plainTextMessage,
            string htmlMessage,
            string replyTo = null)
        {
            if (string.IsNullOrWhiteSpace(to))
            {
                throw new ArgumentException("no to address provided");
            }

            if (string.IsNullOrWhiteSpace(from))
            {
                throw new ArgumentException("no from address provided");
            }

            if (string.IsNullOrWhiteSpace(subject))
            {
                throw new ArgumentException("no subject provided");
            }

            var hasPlainText = !string.IsNullOrWhiteSpace(plainTextMessage);
            var hasHtml = !string.IsNullOrWhiteSpace(htmlMessage);
            if (!hasPlainText && !hasHtml)
            {
                throw new ArgumentException("no message provided");
            }

            var m = new MimeMessage();
          
            m.From.Add(new MailboxAddress("", from));
            if(!string.IsNullOrWhiteSpace(replyTo))
            {
                m.ReplyTo.Add(new MailboxAddress("", replyTo));
            }
            m.To.Add(new MailboxAddress("", to));
            m.Subject = subject;

            //m.Importance = MessageImportance.Normal;
            //Header h = new Header(HeaderId.Precedence, "Bulk");
            //m.Headers.Add()

            BodyBuilder bodyBuilder = new BodyBuilder();
            if(hasPlainText)
            {
                bodyBuilder.TextBody = plainTextMessage;
            }

            if (hasHtml)
            {
                bodyBuilder.HtmlBody = htmlMessage;
            }

            m.Body = bodyBuilder.ToMessageBody();
           
            using (var client = new SmtpClient())
            {
                await client.ConnectAsync(
                    smtpOptions.Server,
                    smtpOptions.Port,
                    smtpOptions.UseSsl)
                    .ConfigureAwait(false);
               
                // Note: since we don't have an OAuth2 token, disable
                // the XOAUTH2 authentication mechanism.
                client.AuthenticationMechanisms.Remove("XOAUTH2");

                // Note: only needed if the SMTP server requires authentication
                if(smtpOptions.RequiresAuthentication)
                {
                    await client.AuthenticateAsync(smtpOptions.User, smtpOptions.Password)
                        .ConfigureAwait(false);
                }
               
                await client.SendAsync(m).ConfigureAwait(false);
                await client.DisconnectAsync(true).ConfigureAwait(false);
            }

        }

        public async Task SendMultipleEmailAsync(
            SmtpOptions smtpOptions,
            string toCsv,
            string from,
            string subject,
            string plainTextMessage,
            string htmlMessage)
        {
            if (string.IsNullOrWhiteSpace(toCsv))
            {
                throw new ArgumentException("no to addresses provided");
            }

            if (string.IsNullOrWhiteSpace(from))
            {
                throw new ArgumentException("no from address provided");
            }

            if (string.IsNullOrWhiteSpace(subject))
            {
                throw new ArgumentException("no subject provided");
            }

            var hasPlainText = !string.IsNullOrWhiteSpace(plainTextMessage);
            var hasHtml = !string.IsNullOrWhiteSpace(htmlMessage);
            if (!hasPlainText && !hasHtml)
            {
                throw new ArgumentException("no message provided");
            }

            var m = new MimeMessage();
            m.From.Add(new MailboxAddress("", from));
            string[] adrs = toCsv.Split(',');

            foreach (string item in adrs)
            {
                if (!string.IsNullOrEmpty(item)) { m.To.Add(new MailboxAddress("", item)); }
            }

            m.Subject = subject;
            m.Importance = MessageImportance.High;
          
            BodyBuilder bodyBuilder = new BodyBuilder();
            if (hasPlainText)
            {
                bodyBuilder.TextBody = plainTextMessage;
            }

            if (hasHtml)
            {
                bodyBuilder.HtmlBody = htmlMessage;
            }

            m.Body = bodyBuilder.ToMessageBody();

            using (var client = new SmtpClient())
            {
                await client.ConnectAsync(
                    smtpOptions.Server,
                    smtpOptions.Port,
                    smtpOptions.UseSsl).ConfigureAwait(false);
               
                // Note: since we don't have an OAuth2 token, disable
                // the XOAUTH2 authentication mechanism.
                client.AuthenticationMechanisms.Remove("XOAUTH2");

                // Note: only needed if the SMTP server requires authentication
                if (smtpOptions.RequiresAuthentication)
                {
                    await client.AuthenticateAsync(
                        smtpOptions.User,
                        smtpOptions.Password).ConfigureAwait(false);
                }

                await client.SendAsync(m).ConfigureAwait(false);
                await client.DisconnectAsync(true).ConfigureAwait(false);
            }

        }

    }
}

Note that I’ve implemented two methods here: one that sends email to a single recipient and one that takes a comma-separated list of recipients. This code could probably be refactored a bit to reduce duplication, and I actually plan to implement more overloads for handling things like attachments. This is just an initial working stub that I plan to evolve as I encounter more varied needs in my project. Note that you can pass in an html formatted message, a plain text message, or both, but you must of course pass in at least one of them. I’ve left a few comments in the code to show how things like message importance can be set, but really I’ve only scratched the surface of what MailKit/MimeKit can do for you, so I encourage you to explore the available api surface of those projects.
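
For completeness, here is a minimal usage sketch; the server, credentials, and addresses below are placeholders, and the call is assumed to be made from an async method:

var smtpOptions = new SmtpOptions
{
    Server = "smtp.example.com",
    Port = 587,
    UseSsl = true,
    RequiresAuthentication = true,
    User = "smtpuser",
    Password = "secret"
};

var sender = new EmailSender();
await sender.SendEmailAsync(
    smtpOptions,
    to: "recipient@example.com",
    from: "noreply@example.com",
    subject: "Confirm your account",
    plainTextMessage: "Please confirm your account.",
    htmlMessage: "<p>Please confirm your account.</p>");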

Feel free to borrow this code and use it in your own projects, and I hope you will also take a look at my various open source projects on github that may be of use or value to you on your projects.

Happy Coding!!!