
Epic life quest

Today I want to talk a bit about my goals in life, and how I organise them.

Some time ago I came across the idea of epic life quests. I originally heard about them from Brent Ozar, who in turn heard about them from Steve Kamb, and I liked the idea behind them. I started working on my own, and about 5 years ago I started writing it down. The full list of everything on my quest is here. I realise I’m probably never going to achieve everything on this list, but it helps me keep a certain amount of focus on the things that matter to me long-term.

I broke my goals up into various sections, including travel (because I love to travel and have a lot of places I want to see), and hobbies (because it’s important for me to keep doing the things I enjoy and not just working), as well as some work goals.

A couple of years after that, when it was getting close to the end of December, I started to think about what my New Year’s Resolutions would be that year. I was looking at the usual stuff (lose weight, exercise more, learn something new) when I realised those goals sucked. They weren’t measurable, they never motivated me, they usually couldn’t be broken down into smaller chunks, and largely they were just vague aspirations to be better at some stuff. I decided that year I wasn’t going to do resolutions, I was going to set myself some New Year’s Goals.

So I did that: I pulled some goals from my epic life quest, made up a couple more that were just things I needed to get done in the year to stop myself from backsliding, and ended up with 6 goals. I hit 3 of them by the end of the year, and although a 50% success rate wasn’t ideal, it was dramatically better than my usual 0% success rate on resolutions. I decided to do the same the next year, and set myself 10 goals. Unfortunately that was last year, and we all know what a dumpster fire 2020 turned out to be, but I still achieved some of them, which was, again, better than nothing.

So, this year I’ve carried that on. So far I’m not doing great but that’s because the New Stars of Data talk happened and interrupted a lot of my planning. I still feel like I’m on course to achieve quite a few of these:

  • Tidy up and create some ‘branding’ for my professional presence e.g. this blog, potential talks etc.
    • I think the content on this blog is alright, but it could probably use a facelift
    • The about me page needs an update, and I need to create a couple more pages as well
    • When I update the look of the blog it would be good to update the presentations I give to have a similar look
  • Have a painted 40k combat patrol
    • One of my nerd hobbies. I’ve always loved the Games Workshop games, and now seems like a good time to get back into them as I live just down the road from their HQ.
  • Have a painted blood bowl team/underworlds warband
    • More Games Workshop stuff
  • Blog once a week for 6 months solid
    • Last year I did this for 3 months, this year I’m pushing for 6
  • Weigh under 95kg every day for 3 months
  • Sort out the garden
    • Not on my epic quest
    • The garden is in a state, and I need to properly attack it at some point or I will get in trouble with the letting agency
    • Does potentially get me closer to having my ‘garden party’ garden, which is on my quest, but only if I get the full garden party setup before I move
  • Earn the Azure Data Engineer certification
    • This is almost certainly getting dropped to make room for the New Stars of Data talk
  • Finish my novel
  • Host a proper Halloween party
    • This means everyone dresses up, appropriately spooky decorations and snacks
  • Frame and hang all holiday pictures
    • I’ve got quite a few paintings etc. from different holidays and I want to get them all framed and hung up around the house
    • Not on my epic quest
  • Stretch goal if Covid allows – Jordan desert trek
    • Not happening this year
    • Plan was to trek through the Jordan desert to Petra and the Dead Sea
    • New plan is to do a combined Egypt/Jordan holiday next spring when that will hopefully be more practical
    • This will tick a few goals off the holiday quests

SSDT database projects: how to write check constraints

I came across an interesting quirk of database projects a few days ago. More specifically, a quirk when you come to publish a project that includes check constraints.

I had a few tables with check constraints added to different columns to specify that only particular values could be used in those columns. It looked something like the following:

CREATE TABLE [CheckConstraintExamples].Customer
(
    [CustomerID] INT NOT NULL, 
    [Forename] VARCHAR(50) NOT NULL, 
    [Surname] VARCHAR(50) NOT NULL, 
    [Email] VARCHAR(200) NOT NULL, 
    [CustomerType] VARCHAR(20) NOT NULL, 
    [CustomerLoyalty] VARCHAR(20) NOT NULL, 
    [CustomerValue] VARCHAR(20) NOT NULL, 
    CONSTRAINT [PK_Customer] PRIMARY KEY ([CustomerID]), 
    CONSTRAINT [CK_Customer_CustomerType] CHECK
        (CustomerType IN (
                               'Basic'
                             , 'Premium'
                             , 'Platinum')), 
    CONSTRAINT [CK_Customer_CustomerLoyalty] CHECK
        (CustomerLoyalty IN (
                                  'Low'
                                , 'Medium'
                                , 'High'
                                , 'Lapsed')), 
    CONSTRAINT [CK_Customer_CustomerValue] CHECK
        (CustomerValue IN (
                                'Low'
                              , 'Medium'
                              , 'High'
                              , 'Superspender'
                              , 'None'))
);

That all looks OK, and everything published fine, and kept on publishing fine, until I happened to check the generated script for some of the publishes (I was in the early stages of development at this point, so deployments were just being done through Visual Studio directly to my machine).

When I did, I noticed that on every deployment I was getting code like this:



GO
PRINT N'Dropping [CheckConstraintExamples].[CK_Customer_CustomerLoyalty]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] DROP CONSTRAINT [CK_Customer_CustomerLoyalty];

GO
PRINT N'Dropping [CheckConstraintExamples].[CK_Customer_CustomerType]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] DROP CONSTRAINT [CK_Customer_CustomerType];

GO
PRINT N'Dropping [CheckConstraintExamples].[CK_Customer_CustomerValue]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] DROP CONSTRAINT [CK_Customer_CustomerValue];

GO
PRINT N'Creating [CheckConstraintExamples].[CK_Customer_CustomerLoyalty]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] WITH NOCHECK
    ADD CONSTRAINT [CK_Customer_CustomerLoyalty]
    CHECK (CustomerLoyalty IN ('Low', 'Medium', 'High', 'Lapsed'));

GO
PRINT N'Creating [CheckConstraintExamples].[CK_Customer_CustomerType]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] WITH NOCHECK
    ADD CONSTRAINT [CK_Customer_CustomerType]
    CHECK (CustomerType IN ('Basic', 'Premium', 'Platinum'));

GO
PRINT N'Creating [CheckConstraintExamples].[CK_Customer_CustomerValue]...';

GO
ALTER TABLE [CheckConstraintExamples].[Customer] WITH NOCHECK
    ADD CONSTRAINT [CK_Customer_CustomerValue]
    CHECK (CustomerValue IN ('Low', 'Medium', 'High', 'Superspender', 'None'));

GO
PRINT N'Checking existing data against newly created constraints';

GO
USE [$(DatabaseName)];

GO
ALTER TABLE [CheckConstraintExamples].[Customer] WITH CHECK CHECK CONSTRAINT [CK_Customer_CustomerLoyalty];

ALTER TABLE [CheckConstraintExamples].[Customer] WITH CHECK CHECK CONSTRAINT [CK_Customer_CustomerType];

ALTER TABLE [CheckConstraintExamples].[Customer] WITH CHECK CHECK CONSTRAINT [CK_Customer_CustomerValue];

That’s dropping all my constraints, recreating them using WITH NOCHECK, and then using WITH CHECK to check the existing data. Obviously all the data will pass those checks, but that’s going to add some time to my deployments, and as the size of the data and the number of checks both increase, these checks will take more and more time.

So, what’s going on? The answer was pretty simple really: when you create a table with check constraints like the ones I’ve used, SQL Server doesn’t keep the IN when it stores the constraint definition. Instead it breaks it out into a series of OR predicates, and when you deploy again the deployment process doesn’t recognise that those OR predicates are equivalent to the single IN predicate. Because it thinks the constraints are different, it drops the existing constraint and creates a new one based on the code in your project. Then the SQL Server engine transforms the IN predicate in the new constraint into a series of OR predicates, and the cycle begins again.
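
You can see this for yourself by querying the stored definitions after a deployment (a quick check against the example table above):

-- Look at how SQL Server has stored the check constraint definitions
SELECT cc.name, cc.definition
FROM sys.check_constraints AS cc
WHERE cc.parent_object_id = OBJECT_ID(N'CheckConstraintExamples.Customer');
-- The definition column comes back as a chain of OR predicates, not the original IN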

The only solution is to re-write the CHECK CONSTRAINTs to use OR instead of IN, like this:

CREATE TABLE [CheckConstraintExamples].Customer
(
    [CustomerID] INT NOT NULL, 
    [Forename] VARCHAR(50) NOT NULL, 
    [Surname] VARCHAR(50) NOT NULL, 
    [Email] VARCHAR(200) NOT NULL, 
    [CustomerType] VARCHAR(20) NOT NULL, 
    [CustomerLoyalty] VARCHAR(20) NOT NULL, 
    [CustomerValue] VARCHAR(20) NOT NULL, 
    CONSTRAINT [PK_Customer] PRIMARY KEY ([CustomerID]), 
    CONSTRAINT [CK_Customer_CustomerType] CHECK
        (
               CustomerType = 'Basic'
            OR CustomerType = 'Premium'
            OR CustomerType = 'Platinum'
        ), 
    CONSTRAINT [CK_Customer_CustomerLoyalty] CHECK
        (
               CustomerLoyalty = 'Low'
            OR CustomerLoyalty = 'Medium'
            OR CustomerLoyalty = 'High'
            OR CustomerLoyalty = 'Lapsed'
        ), 
    CONSTRAINT [CK_Customer_CustomerValue] CHECK
        (
               CustomerValue = 'Low'
            OR CustomerValue = 'Medium'
            OR CustomerValue = 'High'
            OR CustomerValue = 'Superspender'
            OR CustomerValue = 'None'
        )
);

Once I’d done that I thought everything would be fine, so I generated another deployment script. It turns out there was one more little thing. When the SQL Server engine generated the object definition for those checks, it had ordered the predicates according to where they were in the original IN predicate, but from last to first, which gives definitions like this:

ALTER TABLE [CheckConstraintExamples].[Customer]
WITH CHECK ADD
CONSTRAINT [CK_Customer_CustomerLoyalty] CHECK
    ((
           [CustomerLoyalty]='Lapsed'
        OR [CustomerLoyalty]='High'
        OR [CustomerLoyalty]='Medium'
        OR [CustomerLoyalty]='Low'
    ))
GO

ALTER TABLE [CheckConstraintExamples].[Customer]
WITH CHECK ADD
CONSTRAINT [CK_Customer_CustomerType] CHECK
    ((
           [CustomerType]='Platinum'
        OR [CustomerType]='Premium'
        OR [CustomerType]='Basic'
    ))
GO

ALTER TABLE [CheckConstraintExamples].[Customer]
WITH CHECK ADD
CONSTRAINT [CK_Customer_CustomerValue] CHECK
    ((
           [CustomerValue]='None'
        OR [CustomerValue]='Superspender'
        OR [CustomerValue]='High'
        OR [CustomerValue]='Medium'
        OR [CustomerValue]='Low'
    ))
GO

If you check my definitions in the previous code block, I’d written them the other way around. I suspected this meant the deployment engine still thought they were different check constraints, so to test it I deployed my revised project and then generated another script. Sure enough, this final script wasn’t dropping and recreating those check constraints any more, and when I regenerated the table definition in Management Studio it showed all of the predicates in the order I’d specified.

I said above that the only way I could see to fix the issue is to rewrite the check constraints using a list of OR predicates. That’s not quite true: the other thing you can do is write your constraints like that to begin with, because you’re aware of the issue. Hopefully this blog means a few more of you will do that and not have to go through the rewrite task I now have ahead of me.

GREATEST and LEAST functions!

So, this is pretty cool. Microsoft have brought the GREATEST and LEAST functions to Azure SQL Database, and will be bringing them to the on-premises product in a later release (presumably that means the next release).

These work like scalar-function versions of the MIN and MAX aggregates: you write something like GREATEST(1, 3, 4, 2, 6, 7, 5, 9, 13) and it returns the largest of the values supplied (13 in this case). The article gives a few use cases, and there is full documentation for both GREATEST and LEAST in Microsoft Docs.
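
A couple of quick examples (the table and column names in the second query are invented for illustration):

-- Scalar comparison across a list of literal values
SELECT GREATEST(1, 3, 4, 2, 6, 7, 5, 9, 13) AS BiggestValue,  -- 13
       LEAST(1, 3, 4, 2, 6, 7, 5, 9, 13)    AS SmallestValue; -- 1

-- More usefully, comparing values across columns in the same row
-- (dbo.CustomerContact and its columns are hypothetical)
SELECT CustomerID,
       GREATEST(LastEmailDate, LastCallDate, LastVisitDate) AS LastContactDate
FROM dbo.CustomerContact;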

This is something that’s been wanted for a long time, and in its absence we’ve often resorted to some complicated CASE statements that we’d probably all like to forget, so I’m really happy it’s there in Azure and coming soon to on-prem. Now I just have to convince my company that we should probably migrate off SQL 2008 at some point…

QUOTENAME and dynamic SQL

Fairly quick one today, just talking about the QUOTENAME() function.

I don’t have the usual aversion to dynamic SQL that a lot of developers have. I use it quite regularly as part of my ETL processes, usually to define how data will move from one stage to another. So, for instance, I might have a single Validate procedure that takes a number of parameters, including an import table name, and moves data from that import table to a valid table according to my validation rules. Or I might have a single SCDMerge procedure that takes a transformation view and merges it into a slowly changing dimension (although not using the actual MERGE statement, for various reasons). These procedures give me confidence that data will always move from one stage to another in the same way, and save me from writing essentially the same statement 50 times, having to update it 50 times when I need to change the way we do something, and inevitably missing one and introducing an error.
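
A very stripped-down sketch of the sort of thing I mean (the procedure, the table names and the validation rule are all invented for illustration, not my actual code):

CREATE PROCEDURE dbo.Validate
    @ImportSchema SYSNAME,
    @ImportTable  SYSNAME,
    @ValidSchema  SYSNAME,
    @ValidTable   SYSNAME
AS
BEGIN
    -- Build the movement statement once, based on the parameters passed in
    DECLARE @sql NVARCHAR(MAX) =
          N'INSERT INTO ' + QUOTENAME(@ValidSchema) + N'.' + QUOTENAME(@ValidTable)
        + N' SELECT * FROM ' + QUOTENAME(@ImportSchema) + N'.' + QUOTENAME(@ImportTable)
        + N' WHERE Email LIKE ''%_@_%'';';  -- stand-in for the real validation rules

    EXEC sys.sp_executesql @sql;
END;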

This always feels like a pretty safe use of dynamic SQL to me, because it avoids some of the more common objections people like to raise to it:

For instance, people will often say that dynamic SQL doesn’t allow re-use of a query plan from the plan cache. This is at least partially true, but for ETL jobs where the query time is measured in minutes, the extra time to generate a plan from scratch every time just stops being an issue.

I also hear that dynamic SQL is hard to troubleshoot. This is true: you have to print out the generated script, run it, look for errors, and then figure out why your actual proc is generating this garbage code with commas all over the place and dangling ANDs in the WHERE clause. But my experience has been that it’s easier to debug this one script than it is to debug 20 different validate scripts for different tables.

Finally, we have the SQL injection risk. This is less of an issue with these scripts as I define a lot of the parameters that get passed in as part of the overall solution. There are some parameters, however, that do get defined externally. For instance, when I import a file, I generate a table with that file’s name and base the columns in that table on the columns in the file. This is still vulnerable to SQL injection if the person sending the file is crafty with a column name somewhere (probably a pretty minor risk as the people sending the files are either from elsewhere in the organisation or clients, but you always need to be careful).

That is where QUOTENAME comes in. This simple function takes any object name and escapes it with square brackets. It also goes through the name and doubles any closing square brackets. So something like 'ColumnName NVARCHAR(MAX)])DROP TABLE dbo.Customer--', which might cause some issues if allowed in unsanitised, becomes '[ColumnName NVARCHAR(MAX)]])DROP TABLE dbo.Customer--]', and therefore just creates a column called 'ColumnName NVARCHAR(MAX)])DROP TABLE dbo.Customer--'. Now, that might not be exactly what you want, but at least it’s not dropping the customer table, and you will probably notice the odd column name and start asking some difficult questions of whoever sent you that file.
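
A quick sketch of what that looks like in practice (the table name is made up, and I’ve left the final execution commented out):

DECLARE @TableName  SYSNAME = N'Import_CustomerFile';
DECLARE @ColumnName SYSNAME = N'ColumnName NVARCHAR(MAX)])DROP TABLE dbo.Customer--';

-- QUOTENAME wraps the whole value in brackets and doubles the ] inside it,
-- so the dodgy column name can't break out of the CREATE TABLE statement
DECLARE @sql NVARCHAR(MAX) =
      N'CREATE TABLE dbo.' + QUOTENAME(@TableName)
    + N' (' + QUOTENAME(@ColumnName) + N' NVARCHAR(MAX));';

PRINT @sql;
-- CREATE TABLE dbo.[Import_CustomerFile] ([ColumnName NVARCHAR(MAX)]])DROP TABLE dbo.Customer--] NVARCHAR(MAX));
--EXEC sys.sp_executesql @sql;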

Something to note is that this only works on valid object names that conform to the SYSNAME data type, i.e. the object name cannot be more than 128 characters. Because of this it returns data as NVARCHAR(258), as that is the maximum length the escaped string could possibly be (if every character is a closing square bracket they will all be doubled to give 256 characters, and the resulting string is wrapped in square brackets to give a total of 258 characters).

One more thing you can do with this function is specify the escape character as a second input (as we have seen, if you don’t do this it defaults to square brackets). So, if you are using double quotes instead of square brackets you would write QUOTENAME(@ObjectName, '"'). This would then wrap @ObjectName in double quotes, and double any double quotes found in the string. This second input only takes a single character, so if you are using any form of brackets it allows you to input either the opening or the closing bracket, your choice. So, you could write QUOTENAME(@ObjectName, '>') or QUOTENAME(@ObjectName, '<') and either way you end up with @ObjectName wrapped in angle brackets and any instances of ‘>’ doubled to ‘>>’. There’s a limited number of acceptable inputs for this parameter: single quotes, double quotes, regular brackets (parentheses), square brackets, angle brackets, curly brackets, or a backtick (`).
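
A couple of quick examples of that second parameter (outputs shown in the comments, per the behaviour described above):

SELECT QUOTENAME(N'My"Object', '"') AS DoubleQuoted,    -- "My""Object"
       QUOTENAME(N'My>Object', '>') AS AngleBracketed,  -- <My>>Object>
       QUOTENAME(N'My>Object', '<') AS SameResult;      -- <My>>Object>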

I’m sure there are some other uses for this function beyond dynamic SQL, but that’s the main thing I use it for. I’m currently looking at writing more of my dynamic SQL using C# (at which point I guess it becomes generated SQL instead), and I will need to figure out a C# alternative to QUOTENAME if I’m going to make that work safely.

T-SQL Tuesday #137: Using Notebooks Every Day

This is my 8th TSQLTuesday, and this month Steve Jones has invited us to write about Jupyter Notebooks.

I’ve been aware of Notebooks for a little while, but didn’t really know what they were or how to get started with them. Then I attended a talk a few years ago by Rich Benner on Azure Data Studio, which was really interesting in its own right, but also included the ability to create and edit notebooks.

After that I got a bit more interested, and had a bit of a play and did a bit of research. I could see a few use cases for them that other people had written about, things like creating a notebook to pass to the DBAs/admins to run against live and save with the results included, as an easy way to do some basic troubleshooting or analysis on a production system.

This seemed really appealing at the time, as I was working somewhere where devs were expected to troubleshoot issues on live databases without being able to access them. However, the organisation I was part of moved very slowly and the chances of notebooks or ADS being adopted any time soon were pretty slim, so that was, unfortunately, a bit of a non-starter.

Since then I’ve continued to have a play every once in a while, but I’ve never been anywhere where the people around me were much interested, and without that it’s quite hard to get traction with this sort of thing.

I have found one use, however, and that’s as a set of notes for myself.

Recently I was looking to explain to someone about some of the newer syntax that’s available in T-SQL (and when I say newer I mean anything post 2008). I did a quick bit of research and realised there was plenty that I’d forgotten about and didn’t use even when it could be useful for me, so I set up a notebook to very briefly explain the new functionality and include some basic examples. It’s not complete, but you can find it in my GitHub repo.

Going forward I plan on adding a few more notebooks there to help me keep track of any other bits of syntax that I either forget exist or have to research every time I need to use them. I’m thinking one for XML and another for JSON might be really handy, as well as one for all the aggregation/windowing function options. If I can get motivated enough (and that can be a big if) this will hopefully grow into a kind of mini Books Online, but personalised to help me remember the things I need to know.

Error message: “Adding a value to a ‘date’ column caused an overflow”

This is an error I ran across a little while ago, and thought it was worth a quick blog post (as a reminder to myself as much as anything).

The error came about because I was working with a date dimension, and I knew I would encounter cases where the date wasn’t exactly known but was known to be some point in the future. To try and handle these cases so that any WHERE SYSDATETIME() < DateColumn predicates would work correctly, I included a row for the 31st December 9999 in the date dimension. I ran some data into the table and tested a few simple queries and everything worked, but soon after a lot of other queries started to fail with the “Adding a value to a ‘date’ column caused an overflow” error.

This happened because I had some queries where I was joining to the date dimension and trying to return the next date with a DATEADD() function. Even though I was only running this on tables where the dates were in the past, SQL Server was starting the query execution by calculating the ‘tomorrow’ date against every row in the date table before joining the results to the main fact table. This resulted in the query trying to add a day to 9999-12-31, which would put the date out of range of the DATE data type (or indeed any of the date and time data types).
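
The underlying behaviour is easy to reproduce on its own, without any dimension tables involved:

DECLARE @FutureMarker DATE = '9999-12-31';

-- Fails with the same error: adding a value to a 'date' column caused an overflow
SELECT DATEADD(DAY, 1, @FutureMarker) AS Tomorrow;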

Once I realised what was happening I removed the row from the date dimension and shifted to using NULL for the ‘sometime in the future’ DateID. This meant putting a bit of extra work into the query, but everything runs now.
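
Roughly speaking, the extra work is just handling the NULL explicitly wherever the ‘date is still in the future’ logic is needed (a sketch only, with invented fact and dimension names):

-- A NULL ExpectedDeliveryDateID means 'some unknown point in the future'
SELECT f.OrderID
FROM dbo.FactOrder AS f
LEFT JOIN dbo.DimDate AS d
    ON d.DateID = f.ExpectedDeliveryDateID
WHERE f.ExpectedDeliveryDateID IS NULL            -- unknown future date
   OR d.[Date] > CAST(SYSDATETIME() AS DATE);     -- known future date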

As a bonus, it’s clearer what is happening. NULL is the correct marker to use in these scenarios, because it represents an unknown value. You may be able to get away with using a default value instead, but there’s always the danger that some future developer won’t realise that’s what’s happening and write code that will treat the default as if it’s a valid value.

SSDT database projects: deployment error messages

Today I’d like to talk about three error messages you might see when deploying an SSDT database project, either through Visual Studio or via a dacpac and script. I’m going to focus here on what you see from inside of Visual Studio, but you will see similar errors returned when you deploy using a script and the reasons behind them will be the same.

Chances are whatever the error is, you will start off with something like:

This appears in the Data Tools Operations tab, usually at the bottom of the Visual Studio window

From here, you need to click on the View Results link inside Visual Studio, which brings up a tab with the release script in the main window, and a Message tab in the results window. The Message tab is what we’re interested in: it shows you the feedback from all the PRINT commands, as well as the actual error that came back when the publish process tried to run the script against an actual database.

Regular errors

These will almost always be something to do with aggregation. For some reason the IntelliSense inside of Visual Studio doesn’t catch aggregation issues, and the build process lets them through as well.

The Message tab will probably look something like this:

And what you need to do is scroll right to see the end of that long line I’ve cut off in that snapshot:

As expected, I’ve included something in a SELECT and not in the corresponding GROUP BY, and IntelliSense didn’t spot that until we actually tried to deploy the code. Easy fix here: add the column to the GROUP BY or, more generally, fix the T-SQL error.
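
As an illustration, a definition along these lines (hypothetical, reusing the example customer table from an earlier post) builds without complaint but fails when the publish script runs:

CREATE VIEW [CheckConstraintExamples].[CustomerCounts]
AS
SELECT CustomerType,
       CustomerLoyalty,              -- not in the GROUP BY, so the deployment fails here
       COUNT(*) AS CustomerCount
FROM [CheckConstraintExamples].[Customer]
GROUP BY CustomerType;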

The included file does not exist

This is more usually something you will find when building the project, but sometimes it can sneak in at the deployment stage as well. It happens when you’ve used a file reference in a pre- or post-deployment script and the referenced file doesn’t exist. This should just appear in your error window like this:

This really is a top-quality error message: it tells you the file that’s missing, the file that contains the reference, and even the line in the file with that reference. Simple solution here: figure out what’s going on with the reference and either make sure the file is there or remove the reference.
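
For context, the reference in question is the SQLCMD :r syntax you would typically use in a post-deployment script (the file names here are made up):

/*
Script.PostDeployment.sql
*/
:r .\ReferenceData\CustomerTypes.sql
:r .\ReferenceData\CustomerLoyalty.sql
-- If either referenced file is missing from the project, the build/publish fails
-- with "The included file does not exist"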

Changes to [X] may introduce runtime errors in [Y]

This error occurs because you have an object [Y] in your target database that you have not included in your database project for whatever reason. [Y] includes a reference to another object, [X], that you have included in your database project and that also exists in your target database.

The current deployment will make a change to [X], and the deployment process can see that there is an object [Y] in the target database that may be affected by the change. It cannot, however, see whether [Y] will still be in a valid state after the change to [X], because [Y] is not in the database project.

This is not something you can turn off, or work around. The only way to fix this is to bring the project and the target database in line, either by deleting [Y] on the server or bringing [Y] into your project.

This originally started out as a couple of slides in my New Stars of Data talk, but got cut due to time constraints. I was sad to get rid of them because there’s some useful stuff there, but they didn’t quite fit the theme of the talk and so were dropped in favour of something that did. Anyway, I thought they might make a useful blog post for someone.

New Stars of Data feedback

As mentioned in my last post, I gave my first ever conference talk at New Stars of Data last Friday. Recordings of all of the talks (including mine) are now available on YouTube, and all of the slide decks are available in the NSoD GitHub, or you can find mine here. I’ve not yet been able to listen to mine, but I will manage it someday (my voice always sounds so strange when it’s recorded).

I thought the session went very well, and I had a few nice comments at the end, but I was still more than a little nervous about the feedback form.

As it happens I did pretty well…

If you can’t see them in the picture, they are:

Demos: 4.3, Overall Session: 4.71, Presentation Style: 4.57, Session Matched Abstract: 4.77, Session Met Expectations: 4.69, and Speaker’s Knowledge: 4.93

I don’t want to brag, but I’m ridiculously happy with these scores. The Speaker’s Knowledge and Overall Session scores in particular just blow me away.

Then, looking at the comments…

Again, really couldn’t be happier.

A few highlights:

This session blew my mind. I did not know that these things are possible with SSDT even though I played with it before. Thank you Chris!

Awesome job Chris! I love it when I can learn new things and you knocked it out of the park. Very well done. You have great pace, tone, & answered questions well. Keep it up and I look forward to more sessions from you.

Excellent session. Interesting topic with clear understanding, clearly expressed.

Feedback

Clearly there’s one person who wanted more demos, and it feels like they wanted more of a general introduction to Visual Studio database projects. I feel bad for them because I want everyone to get something out of the talk, but it’s not an introduction session. I’m seriously considering putting together an introduction session as a companion to this one, but this session can largely stay as it is. The main thing I might tweak is the abstract, to make it clearer that this is an advanced dive into the topic.

That might seem a bit harsh, but I only have an hour to get everything across. I did want to include some demos in the presentation but that would have meant cutting out 10-15 minutes of talking, which maybe means I don’t get to the really good stuff. Some conferences only have 50 minute slots for speakers so I may need to cut this talk down a bit if I want to try and speak at them in the future (spoiler alert: I definitely do want to try and speak at some of them, speaking was awesome and I very much have the bug now).

How I built the talk in the first place

Part of the reason I know I can’t do everything is because of the way this talk evolved in the first place. I wanted to give a talk at my local user group a couple of years ago, and settled on a general overview of best practices, or things we can learn from the .NET world. This was largely because I’d been working with .NET developers for a few years, and had noticed a lot of ways in which their best practices seemed more embedded than they did in the SQL Server world. I’m talking about things like source control, unit tests, versioning, all kinds of stuff.

Anyway, that first talk was fairly well received even if I was a stuttering mess throughout, and I decided I wanted to do another. This time I figured I’d follow up on what I’d done before and talk in more detail about Visual Studio database projects as an option for source control, as well as a few of the other interesting things you can do with them.

That went a bit better, but I still felt like it wasn’t great. Then I heard about New Stars of Data, and checking the entry requirements I saw that having spoken at a user group before didn’t disqualify you (I guess they were looking for anyone who hadn’t spoken at conference level). I applied with an abstract about what you could do with database projects besides just putting your project under source control, with the aim being to build on the previous talk.

Long story short, I got accepted, and worked really hard with my mentor to try and make a good talk. At the start I figured I’d cut out some of the stuff from the last talk and add a demo, but after Rob saw my slides he said something along the lines of “We have a lot of work to do” and I knew I’d need something a bit more drastic. We ended up with something that looked completely different to the previous presentation, with at least half the content gone, and that content needed to go. If I hadn’t cut it, if I’d tried to give the introduction talk, I wouldn’t have had the time I needed to really dive into the good stuff that ended up impressing people. As it was, I could definitely have spoken for longer on some of those advanced features, and would love to give the 2-4 hour version that one of the commenters requested! Unfortunately conferences don’t often have slots that long, so I have to pick and choose.

I remain incredibly grateful to the conference organisers, to Rob, to everyone who showed up to listen to me, and to everyone who left feedback. My hope is I can kick on and make this the start of my speaker journey.

New Stars of Data

I gave my first ever conference talk yesterday at New Stars of Data. This is a conference specifically for new speakers, where you get paired with a mentor who gives you advice and works with you to make your talk better over the months before the conference. It started out as a new speaker track at DataGrillen, but with the main event being cancelled due to Coronavirus it’s shifted to being a separate conference for the last couple of years.

I can honestly say the experience was fantastic. My mentor was great (Rob Sewell, also known as Beardy McBeardface or DBAWithABeard), and I learned a huge amount from him over the couple of months we were working on the talk together. I also got some really positive feedback from some of the more experienced presenters watching, so that’s really good as well.

I can’t recommend enough that you apply for this if you’re thinking about getting into speaking. Hopefully next year the DataGrillen conference will happen, and I think at that point this will be folded back into the main conference, but you will still have the support and mentoring I had. I’m definitely going to try and speak at more events, and I can’t think of a better way to get started on my speaking journey than this. Huge thank you to everyone involved.

T-SQL Tuesday #135: The outstanding tools of the trade that make your job awesome


This is my 7th TSQLTuesday, and this month we’re all invited to blog about the tools we use in our job that we are particularly attached to.

As it happens, I’m giving a talk in just over a month at New Stars of Data on a tool I think a lot of the community overlook: Visual Studio Database Projects.

I’m not going to regurgitate the whole talk here (that would take far too long), but I will give a quick preview of a couple of slides covering what I like to call the “quality of life” improvements: small things that just become easier when coding through a Visual Studio project rather than in Management Studio or similar.

Renaming and moving objects

Usually we avoid renaming anything in SQL Server. You don’t know what other bit of code might be lurking out there that depends on the object you just renamed, or when that code might be called, and so spelling errors end up being left in for years because remembering it’s the Customet table not the Customer table is easier than having the application fall over at 3am because we missed a reference somewhere in our code base.

Database projects get rid of that fear. If you rename an object, or move it from one schema to another, Visual Studio automatically propagates that change through your solution. And even if there’s some reference somewhere it didn’t recognise as needing updating, the IntelliSense-style validation will cause the next build to fail and tell you the database is no longer fully functional until you fix the issue. You do have to remember to rename through the interface rather than in the scripts, but in exchange you get the peace of mind that comes with knowing you’ve not broken anything with your change, and you can finally query the Customer table.

All of your scripts are object creation scripts

This is a bit of an odd point but bear with me, it leads to something good.

Every script you write in a database project is the script to create the object from scratch. It isn’t a generated script from the object, which is how you view objects in Management Studio.

The reason this is significant is it means you can add comments to objects you wouldn’t normally be able to add comments to, like tables. This means you can add a comment next to a column name and give a decent explanation of what it means, or add a description at the top of the table script to say exactly what each row in this table represents, and why the table exists.
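
For example, something along these lines (reusing the example customer table; the comments are obviously just illustrative):

-- One row per customer who has ever placed an order.
-- The Type/Loyalty/Value columns are maintained by the nightly scoring job.
CREATE TABLE [CheckConstraintExamples].[Customer]
(
    [CustomerID]      INT          NOT NULL,  -- surrogate key assigned on first import
    [Forename]        VARCHAR(50)  NOT NULL,
    [Surname]         VARCHAR(50)  NOT NULL,
    [Email]           VARCHAR(200) NOT NULL,  -- used as the business key when matching files
    [CustomerType]    VARCHAR(20)  NOT NULL,
    [CustomerLoyalty] VARCHAR(20)  NOT NULL,
    [CustomerValue]   VARCHAR(20)  NOT NULL,
    CONSTRAINT [PK_Customer] PRIMARY KEY ([CustomerID])
);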

If you’re encrypting your code, which you might do if you are selling a managed service that gets deployed to the client’s servers, you can also store the unencrypted code in your database project. If you try to retrieve the code from Management Studio you won’t get anything (that’s the point of encrypted objects) but Visual Studio just holds the create script, not the object itself.

F12 to jump to object

Finally, if you highlight an object in your code and hit F12, you will open another tab with the code for that object. No more hunting around in the database for the view definition, now you can just open it up instantly. I found this incredibly useful once I got into the habit of it, but I have to warn you that it only works if the code you’re currently in is error free.

So there you have it, three nice little features in Visual Studio database projects that can make a real difference to your coding. I hope this was helpful to someone, and I’d encourage anyone reading this who isn’t a TSQLTuesday regular to give it a go next month.