Friday, August 11, 2017

Tales of Dynamics GP backups and ransomware

By Steve Endow

At the excellent Dynamics GP Tech 2017 Conference this week, I heard a few interesting stories about ransomware at Dynamics GP customers.

One partner told me a very interesting story about a ransomware attack at a customer site that encrypted everything, including the customer's Dynamics GP database backups.  The Dynamics GP partner was called in to assess the catastrophe, and at first glance, nothing was recoverable.

But he noticed something strange.  Dynamics GP was still working.  He logged into the SQL Server and saw that the Dynamics GP databases were still intact and unencrypted.  His theory was that because SQL Server keeps the MDF and LDF files locked while the service is running, the ransomware was unable to encrypt the live database files.

He was able to stop the SQL Server service, quickly copy all of the database files, and attach them on a clean SQL Server.  Luckily, that copy worked: the ransomware was either no longer active at that point, or it didn't have time to encrypt the now-unlocked database files.  In hindsight, I think I would first try taking full backups of all of the databases, so that the MDF and LDF files stay locked the entire time, though saving the backup files to a clean location that the ransomware can't reach would still be tricky.
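As a rough sketch of that "back up first" idea, here is what scripting full backups of every Dynamics GP database from a small C# console app might look like.  The server name, database list, and destination share are hypothetical--the destination would need to be a clean location the ransomware cannot reach.

    using System;
    using System.Data.SqlClient;

    class BackupAllGPDatabases
    {
        static void Main()
        {
            // Hypothetical server and destination share--adjust for your environment.
            string connectionString = "Server=GPSQL01;Database=master;Integrated Security=True;";
            string destinationFolder = @"\\CLEANSERVER\GPBackups\";

            // The DYNAMICS database plus the company databases, for this example.
            string[] databases = { "DYNAMICS", "TWO", "PROD1" };

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                foreach (string db in databases)
                {
                    // COPY_ONLY avoids disturbing any existing backup chain.
                    string sql = $"BACKUP DATABASE [{db}] TO DISK = '{destinationFolder}{db}.bak' WITH COPY_ONLY, INIT";

                    using (var cmd = new SqlCommand(sql, conn))
                    {
                        cmd.CommandTimeout = 0;  // backups can take longer than the 30 second default
                        Console.WriteLine($"Backing up {db}...");
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }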

Next, during her "Microsoft Azure: Infrastructure, Disaster Recovery, and Backups" session, Windi Epperson shared some harrowing stories about tornadoes in Oklahoma.

Some of Windi's customers have had entire buildings vaporized by a tornado, so even the best on-site backup would have been insufficient.  Windi discussed the Azure Backup service, which I didn't even know about, as a flexible and economical way to get all types of backups off site.  She also demonstrated the Dynamics GP backup to Azure feature that she recommends for small customers who don't have the IT staff to handle off site backups.

I currently have a lot of my data backed up to Backblaze B2 storage through my Synology NAS, but that is a connected sync process, not a true archive backup.  I've been looking for a more traditional, disconnected off-site backup service that is reasonably priced, so I'm going to look into Azure Backup and see if I can set up a process that automatically backs up what I need.


Adding or dropping SQL indexes temporarily on production database tables

By Steve Endow

During one of my presentations on Optimizing SQL Scripting at the GP Tech 2017 Conference this week at the Microsoft campus in Fargo, one of the attendees asked an interesting question:

Suppose you need to run a complex query just a few times, and you find that the query would benefit greatly from adding new indexes to one or more tables.  Rather than adding 'permanent' indexes, would it be prudent to temporarily add the indexes so that you could run the query faster, and then remove the indexes when you no longer need them?

I think it's a great question.  I immediately thought of one likely real-world scenario, and coincidentally, I shared a Lyft back to the airport with a consultant who described a second scenario that required a slightly different index management process.

Scenario 1:  Imagine that at the end of each financial quarter, dozens of large, complex financial and sales analysis reports are run against dozens of Dynamics GP company databases.  If some reports take a minute to run, and a few indexes can be added to reduce the report run times to a few seconds, that time savings could really add up.  I could definitely see the value of adding indexes to speed up this process.

But is it worth adding permanent indexes to tables to support the quarterly reports?  Or is it better to add the indexes once per quarter, run the reports, and then remove the indexes?

I don't currently know how to assess the actual costs vs. benefits of those situations, but given that Dynamics GP is already drowning in SQL indexes, and given that custom indexes may be dropped during a GP upgrade (and it's easy to forget to recreate them), creating the indexes temporarily seems like a reasonable solution for this hypothetical example.

The one concern I expressed about the solution was the potential for the CREATE INDEX process to lock the tables while the indexes are being built.

I did some research, and confirmed my concern: during an offline CREATE INDEX operation, the table is locked.  Building a nonclustered index takes a shared table lock that blocks updates (reads are still allowed), and building a clustered index takes a schema modification lock that blocks all access to the table.

This is mentioned in the "Performance Considerations" section of this Books Online page:

Most Dynamics GP customers use SQL Server Standard Edition, so indexes are created "offline", and the table is locked until the create index operation completes.

SQL Server Enterprise edition does have an "online" indexing option, but from what I have been able to find, even that feature doesn't provide 100% accessibility of the table during the indexing operation, so there may be some challenges in very high volume environments.

If the temporary indexes make sense, my recommendation would be to add the indexes during a maintenance window, such as late at night, run the queries the next day (or over the next few days), and then remove the indexes when they are no longer needed.
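As a rough sketch of what that might look like, here is a small C# routine that adds a temporary reporting index before the quarterly run and drops it afterward.  The table, columns, and index name are purely illustrative, and the ONLINE option noted in the comment is only available on Enterprise Edition.

    using System.Data.SqlClient;

    class TemporaryReportIndex
    {
        static void Run(string connectionString)
        {
            // Illustrative index on the SOP history header table to speed up
            // quarterly reports filtered by document date and customer.
            // On Enterprise Edition, WITH (ONLINE = ON) could be appended to the CREATE INDEX.
            const string createIndex = @"
                IF NOT EXISTS (SELECT 1 FROM sys.indexes
                               WHERE name = 'IX_TEMP_SOP30200_DOCDATE_CUSTNMBR'
                                 AND object_id = OBJECT_ID('dbo.SOP30200'))
                    CREATE NONCLUSTERED INDEX IX_TEMP_SOP30200_DOCDATE_CUSTNMBR
                        ON dbo.SOP30200 (DOCDATE, CUSTNMBR);";

            const string dropIndex = @"
                IF EXISTS (SELECT 1 FROM sys.indexes
                           WHERE name = 'IX_TEMP_SOP30200_DOCDATE_CUSTNMBR'
                             AND object_id = OBJECT_ID('dbo.SOP30200'))
                    DROP INDEX IX_TEMP_SOP30200_DOCDATE_CUSTNMBR ON dbo.SOP30200;";

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // 1. Create the index during the maintenance window.
                new SqlCommand(createIndex, conn) { CommandTimeout = 0 }.ExecuteNonQuery();

                // 2. ...run the quarterly reports here, or over the next few days...

                // 3. Drop the index once the reports are done.
                new SqlCommand(dropIndex, conn).ExecuteNonQuery();
            }
        }
    }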

Scenario 2:  A Dynamics GP consultant told me a story about a prior job where he had to bulk load millions of records into a table on a regular basis.  The bulk load had a very limited time window, so the import had to be completed as quickly as possible.

In order to speed up the import process, they dropped the indexes on the table, imported the millions of additional records, and then added the indexes back to the table.  I hadn't considered that scenario before, but he explained that it worked very well.

I was able to find this Books Online article with recommendations on when to drop indexes (or leave them in place) for bulk load operations.  The guidance depends on whether the table is empty and on how much new data is being imported relative to what is already in the table.
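Here is a minimal sketch of that drop-load-recreate pattern using SqlBulkCopy.  The staging table, column, and index names are made up for illustration:

    using System.Data;
    using System.Data.SqlClient;

    class BulkLoadWithIndexRebuild
    {
        static void Load(string connectionString, DataTable records)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // 1. Drop the nonclustered index before the load (hypothetical names).
                new SqlCommand("DROP INDEX IX_ImportStaging_DocDate ON dbo.ImportStaging;", conn)
                    .ExecuteNonQuery();

                // 2. Bulk load the new records.
                using (var bulkCopy = new SqlBulkCopy(conn))
                {
                    bulkCopy.DestinationTableName = "dbo.ImportStaging";
                    bulkCopy.BatchSize = 10000;
                    bulkCopy.BulkCopyTimeout = 0;
                    bulkCopy.WriteToServer(records);
                }

                // 3. Recreate the index after the load completes.
                new SqlCommand("CREATE NONCLUSTERED INDEX IX_ImportStaging_DocDate ON dbo.ImportStaging (DocDate);", conn) { CommandTimeout = 0 }
                    .ExecuteNonQuery();
            }
        }
    }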

So I learned a few interesting things myself during my session.  Hope this was helpful!


Wednesday, June 21, 2017

Common GP integration error: Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=XX.0.0.0

By Steve Endow

If you have a .NET integration for Dynamics GP that uses the eConnect .NET assemblies, this is a fairly common error:

Could not load file or assembly 'Microsoft.Dynamics.GP.eConnect, Version=XX.0.0.0'

This usually indicates that the integration was compiled with an older (or different) version of the eConnect .NET assemblies.

Why does this happen?

In my experience, there are two situations where you will usually see this.

1. You upgraded Dynamics GP to a new version, but forgot to update your .NET eConnect integrations.  For instance, if you upgraded from GP 2013 to GP 2016, you would see the "Version 12" error message when you run your integration, as the integration is still trying to find the GP 2013 version of eConnect.

2. You are working with an application or product that is available for multiple versions of GP, and the version of the product you have installed doesn't match your GP version.

The good news is that this is simple to resolve.  In the first case, the developer just needs to update the Visual Studio project to reference the proper version of the eConnect DLLs and recompile.  Updating the .NET project shouldn't take very long--maybe 1-4 hours to update and test, depending on the complexity of the integration.  Or if you're using a third-party product, you just need to get the version of that product that matches your GP version.
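If you aren't sure which eConnect version an existing integration was compiled against, a quick reflection check can tell you without opening the source code.  This is just a rough sketch--the exe path is hypothetical:

    using System;
    using System.Reflection;

    class CheckeConnectReference
    {
        static void Main()
        {
            // Hypothetical path to the compiled integration.
            var integration = Assembly.ReflectionOnlyLoadFrom(@"C:\Integrations\MyGPIntegration.exe");

            foreach (AssemblyName reference in integration.GetReferencedAssemblies())
            {
                // GP 2013 = Version 12, GP 2015 = Version 14, GP 2016 = Version 16
                if (reference.Name.StartsWith("Microsoft.Dynamics.GP.eConnect"))
                    Console.WriteLine("{0}, Version={1}", reference.Name, reference.Version);
            }
        }
    }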

If you have a custom .NET integration, the potential bad news is that you, or your developer, or your GP partner, needs to have the .NET source code to update the integration.  Some customers encounter this error when they upgrade to a new version of GP, and realize that the developer who wrote the code left the company 3 years ago and they don't know where the source code might be.  Some customers change GP partners and didn't get a copy of the source code from their prior partner.

If you can't get a copy of the source code, it is theoretically possible to decompile most .NET applications to get some or most of the source code, but in my limited experience as a novice user of such tools, decompilation just doesn't provide a full .NET project that can be easily updated and recompiled.  Or if it does, the code is often barely readable, and would be very difficult to maintain without a rewrite.


Saturday, June 10, 2017

ASP.NET Core and EF Core with Dynamics GP: Trim trailing spaces from char fields

By Steve Endow

UPDATE: A kind soul read this post, nodded his head in commiseration, and then offered a better solution.  See option 5 below for the best approach I've seen so far.

Anyone who has written SQL queries, built integrations, or otherwise had to deal with Dynamics GP data certainly has warm feelings about its colonial-era use of the char data type for all string fields.

This has the lovely side effect of returning string values with trailing spaces that you invariably have to deal with in your query, report, application, XML, JSON, etc.

You can spot a Dynamics GP consultant a mile away by the prolific use of the RTRIM function in their SQL queries.  .NET developers will similarly have Trim() calls thoroughly coating their data access code.

But in this bold new age of Microsoft development tools, where everything you have spent years learning and mastering is thrown out the window, those very simple solutions aren't readily available.

I am developing an ASP.NET Core web API for Dynamics GP, and being a sucker for punishment, I'm also using EF Core for data access.  In one sense, EF Core is like magic--you just create some entities, point it to your database, and presto, you've got data.  Zero SQL.  That's great and all if you have a nice, modern, clean, well designed database that might actually use the space age varchar data type.

But when you're dealing with a relic like a Dynamics GP database, EF Core has some shortcomings.  It isn't really designed to speak to a prehistoric database.  Skipping past the obvious hassles, like exposing the cryptic Dynamics GP field names, one thing you'll notice is that it dutifully spits out the char field values with trailing spaces in all of their glory.

When you convert that to JSON, you get this impolite response:

"itemnmbr": "100XLG                         ",
"itemdesc": "Green Phone                                                                                          ",

"itmshnam": "Phone          "

Yes, they're just spaces, and it's JSON--not a report output, so it's not the end of the world.  But in addition to looking like a mess, the spaces are useless, bloat the response, and may have to be trimmed by the consumer to ensure no issues on the other end.

So I just spent a few hours trying to figure out how to deal with this.  Yes, SpaceX is able to land freaking rockets on a floating barge in the middle of the ocean, while I'm having to figure out how to get rid of trailing spaces.  Sadly, I'm not the only one--this is a common issue for many people.

So how can we potentially deal with this?

1. Tell EF Core to trim the trailing spaces.  As far as I can tell, this isn't possible as of June 2017 (v1.1.1).  EF Core apparently doesn't have a mechanism to call a trim function, or any function, at the field level. It looks like even the full EF 6.1+ framework didn't support this, and you had to write your own code to handle it--and that code doesn't appear to work in EF Core as far as I can tell.

2. Tell ASP.NET Core to trim the trailing spaces, somewhere, somehow.  There may be a way to do this in some JSON formatter option, but I couldn't find any clues as to how.  If someone has a clever way to do this, I'm all ears, and I'll buy you a round at the next GP conference.

3. Use the Trim function in your class properties.  Ugh.  No.  This would involve using the old school method of adding backing fields to your DTO class properties and calling Trim on every string field.  This is annoying in any situation, but to even propose it with ASP.NET Core and EF Core seems like sacrilege.  And if you have used scaffolding to build out your classes from an existing database, this is just crazy talk.  I'm not going to add hundreds of backing fields to hundreds of string properties and add hundreds of Trim calls.  Nope.

4. Use an extension method or a helper class.  This is what I tried initially (see option 5 below for a better solution).  This approach may seem somewhat obvious, but in the world of ASP.NET Core and EF Core, it feels like putting wagon wheels on a Tesla.  It's one step up from adding Trim in your classes, but looping through object properties and trimming every field is far from high tech.  Fortunately it was relatively painless, requires very minimal code changes, and is very easy to rip out if a better method comes along.

There are many ways to implement this, but I used the code from this post:

I created a new static class called TrimStrings, and I added a static TrimStringProperties extension method to the class.

    using System.Linq;

    public static class TrimStrings
    {
        public static TSelf TrimStringProperties<TSelf>(this TSelf input)
        {
            // Find all of the public string properties on the object
            var stringProperties = input.GetType().GetProperties()
                .Where(p => p.PropertyType == typeof(string));

            foreach (var stringProperty in stringProperties)
            {
                string currentValue = (string)stringProperty.GetValue(input, null);
                if (currentValue != null)
                    stringProperty.SetValue(input, currentValue.Trim(), null);
            }

            return input;
        }
    }
I then modified my controller to call TrimStringProperties before returning my DTO object.

    var item = _itemRepository.GetItem(Itemnmbr);

    if (item == null)
        return NotFound();
    var itemResult = Mapper.Map<ItemDto>(item);
    itemResult = TrimStrings.TrimStringProperties(itemResult);

    return Ok(itemResult);

And the new JSON output:

  "itemnmbr": "100XLG",
  "itemdesc": "Green Phone",
  "itmshnam": "Phone",
  "itemtype": 1,

  "itmgedsc": "Phone",

Fortunately this works, it's simple, and it's easy.  I guess that's all that I can ask for.

When using ASP.NET Core and EF Core, you end up using a lot of List<> objects, so I created an overload of TrimStringProperties that accepts a List.

    public static List<TSelf> TrimStringProperties<TSelf>(this List<TSelf> input)
    {
        List<TSelf> result = new List<TSelf>();

        // Trim each object in the list and add it to the result list
        foreach (var obj in input)
        {
            result.Add(TrimStringProperties(obj));
        }

        return result;
    }


5.  Use AutoMapper!  Less than 24 hours after posting this article, kind reader Daniel Doyle recognized the problem I was trying to solve and keenly observed that I was using AutoMapper.  AutoMapper is a brilliant NuGet package that automates the mapping and transfer of data from one object to another similar object--you can see the Mapper.Map call in the controller code above.

Daniel posted a comment below and suggested using AutoMapper to trim the string values as it performed its mapping operation--something I would have never thought of, as this is the first time I've used it.  After some Googling, it looks like AutoMapper is commonly used for this type of data cleanup, and it seems like it is well suited to the task.  It's processing all of the object data as it maps from one class to another, so it seems like a great time to clean up the trailing spaces on the strings.

Daniel suggested a syntax that uses the C# "null coalescing operator" (double question marks), which makes the statement an extremely compact single line of code.

And with a single line added to my AutoMapper configuration block in Startup.cs, all of my objects will have the trailing spaces trimmed automatically.  No extra looping code, no extra method calls in each of my Controllers every time I want to return data.

    AutoMapper.Mapper.Initialize(cfg =>
    {
        cfg.CreateMap<Item, Models.ItemDto>();
        cfg.CreateMap<ItemSite, Models.ItemSiteDto>();
        cfg.CreateMap<SalesOrder, Models.SalesOrderDto>();
        cfg.CreateMap<SalesOrderLine, Models.SalesOrderLineDto>();
        cfg.CreateMap<SalesOrderResponse, Models.SalesOrderResponseDto>();

        // Trim trailing spaces from every string-to-string mapping
        cfg.CreateMap<string, string>().ConvertUsing(str => (str ?? string.Empty).Trim());
    });


This is super clean, and very easy.  No more wagon wheels on the Tesla!

Steve Endow is a Microsoft MVP for Dynamics GP and a Dynamics GP Certified IT Professional in Los Angeles.  He is the owner of Precipio Services, which provides Dynamics GP integrations, customizations, and automation solutions.

You can also find him on Twitter, YouTube, and Google+

Changing Visual Studio keyboard shortcut for Comment and Uncomment

By Steve Endow

A very handy feature in Visual Studio is the Comment / Uncomment editing option.

There are two buttons that allow you to comment or uncomment code with a single click.

While those buttons are handy, they require you to use the mouse, and that can sometimes be tedious if you are having to also make multiple code selections with the mouse.

Visual Studio does have keyboard shortcuts for Comment and Uncomment, but they are the unfortunate double-shortcut combinations:  Ctrl+K, Ctrl+C to comment, and Ctrl+K, Ctrl+U to uncomment.

I find those shortcuts to be pretty annoying, as they require me to use both hands to press those key combinations.  It's not much of a "shortcut".

After several years of this nagging me, I finally bothered to look up a better alternative.  Fortunately, Visual Studio allows you to add your own keyboard shortcuts.  If you click on Tools -> Options, and then select Environment -> Keyboard, you can select a command and assign a new keyboard shortcut.

The one challenge is finding a decent keyboard shortcut that isn't already taken.

I entered the word "comment" and it displayed the relevant commands.  I then selected Edit.CommentSelection, selected Use new shortcut in Text Editor, pressed Alt+C, then clicked Assign.

Now I can comment a selection using the nice and simple Alt+C shortcut.  Big improvement.

I don't Uncomment as much, so for now I haven't assigned a custom shortcut to Edit.UncommentSelection, but at least I now know it's very easy to do.

Keep on coding...and commenting...


Thursday, June 1, 2017

Filtering log entries in ASP.NET ILoggerFactory logs

By Steve Endow

I have two new projects that require web service APIs, so rather than actually use a tried and true tool that I am familiar with to develop these new projects, I am plunging into the dark depths of ASP.NET Core.

If you've played with ASP.NET Core, you may have noticed that Microsoft has decided that everything you have learned previously about developing web apps and web services should be discarded, making all of your prior knowledge and experience worthless.  And if you choose to venture into the new world of ASP.NET Core, you will be rewarded by not knowing how to do anything.  At all.  Awesome, can't wait!

One of those things that you'll likely need to re-learn from scratch is logging.  ASP.NET Core has a native logging framework, so rather than write your own or use a third party logging package, you can now use a built-in logger.  This sounds good, right?

Not so fast.  At this point, I have come to understand that nothing is easy or obvious with ASP.NET Core.

This article provides a basic overview showing how to perform logging in ASP.NET Core.

One thing it doesn't clearly explain is that if you configure your log to capture Information level entries, it will quickly fill with hundreds of entries from the ASP.NET Core framework and web server themselves.  You will literally be unable to find your application's entries in the log file if you log at the Information level.

So the article helpfully points out that ILoggerFactory supports filtering, allowing you to specify that you only want warnings or errors from the Microsoft tools/products, while logging Information or even Debug messages from your application.

You just add this .WithFilter section to the Configure method in your Startup.cs:

    loggerFactory
        .WithFilter(new FilterLoggerSettings
        {
            { "Microsoft", LogLevel.Warning },
            { "System", LogLevel.Warning },
            { "ToDoApi", LogLevel.Debug }
        })

Cool, that looks easy enough.

Except after I add that to my code, I see the red squigglies of doom:

Visual Studio 2017 is indicating that it doesn't recognize FilterLoggerSettings. At all.

Based on my experience with VS 2017 so far, it seems that it has lost the ability (that existed in VS 2015) to identify missing NuGet packages.  If you already have a NuGet package installed, it can detect that you need to add a using statement to your class, but if you don't have the NuGet package installed, it can't help you.  Hopefully this functionality is added back to VS 2017 in a service pack.

After many Google searches, I finally found this StackOverflow thread, and hidden in one of the post comments, someone helpfully notes that the WithFilter extension requires a separate NuGet package, Microsoft.Extensions.Logging.Filter.  If you didn't know that, you'd spend 15 minutes of frustration, like I did, wondering why the very simple Microsoft code sample doesn't work.

Once you add the Microsoft.Extensions.Logging.Filter NuGet package to your project, Visual Studio will recognize both WithFilter and FilterLoggerSettings.
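For context, here is roughly what the wiring looks like in Startup.cs once the package is installed.  This is only a sketch based on the ASP.NET Core 1.x logging API; the "ToDoApi" category comes from the sample above, so substitute your own application's namespace, and the console and debug providers are just examples:

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        // Framework categories only log Warning and above; the app logs Debug and above.
        loggerFactory
            .WithFilter(new FilterLoggerSettings
            {
                { "Microsoft", LogLevel.Warning },
                { "System", LogLevel.Warning },
                { "ToDoApi", LogLevel.Debug }
            })
            .AddConsole()
            .AddDebug();

        app.UseMvc();
    }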

And here is my log file with an Information and Warning message, but no ASP.NET Core messages.

And several wasted hours later, I am now able to read my log file and actually work on the real project code.

Best of luck with ASP.NET Core.  You'll need it.


Thursday, May 11, 2017

Resolving the Veeam "Backup files are unavailable" message when restoring a VM

By Steve Endow

I'm a huge fan of the Veeam Backup and Replication product.  I've used it for several years now to back up my Hyper-V virtual machines to a Synology NAS, and it has been a huge improvement over the low-tech, script-based VM backups I was suffering with previously.

One quirk I have noticed with Veeam is that it seems to be very sensitive to any loss of connectivity with the backup infrastructure.  With a prior version, if Veeam was running but my file server was shut down, I would get error notifications indicating that it couldn't access the file server--even though backups were not scheduled to run.  I haven't noticed those messages lately, so I'm not sure if I just turned them off, or if I haven't been paying attention to them.

Since I don't need my servers running 24x7, I have them scheduled to shut down in the evening and automatically turn on in the morning.  But sometimes if I wrap up my day early, I may shut down all of my servers and my desktop at, say, 8pm.  If I shut down my Synology NAS first, and Veeam detects that the file server is not accessible, it may log a warning or error.

Normally this isn't a big deal, but I found one situation where it results in a subsequent error.  I recently tried to restore a VM, and after I selected the VM to restore and chose a restore point, I received this error message:

Veeam Error:  Backup files are unavailable

When I first saw this message I was concerned there was a problem, but it didn't make sense because Veeam was obviously able to see the backup files and it even let me choose which restore point I wanted.  So I knew that the backup files were available and were accessible.

I could access the network share on my NAS file server and browse the files without issue.  I was able to click on OK to this error message, complete the restore wizard, and successfully restore my VM.  So clearly the backup files were accessible and there wasn't really an issue.

So why was this error occurring?

I submitted a support case to Veeam and spoke with a support engineer who showed me how to resolve this error.  It seems that whenever Veeam is unable to access the file share used in the Backup Infrastructure settings, it sets a flag or error state to indicate that the backup location is not available.  After that happens, you have to manually tell Veeam to rescan the backup infrastructure in order to clear the error.  Fortunately, this is very simple and easy.

In Veeam, click on the Backup Infrastructure button in the bottom left, then click on the Backup Repositories page.  Right click on the Backup Repository that is giving the error, and select Rescan.

The Rescan will take several seconds to run, and when it is done, the "Backup files are unavailable" message will no longer appear when you perform a restore.  Or at least that worked for me.

Overall, I'm incredibly pleased with Veeam Backup and Replication and would highly recommend it if it's within your budget.
